<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<rfc xmlns:xi="http://www.w3.org/2001/XInclude"
     version="3"
     docName="draft-condrey-rats-pop-01"
     ipr="trust200902"
     category="std"
     consensus="true"
     submissionType="IETF"
     sortRefs="true"
     symRefs="true"
     tocInclude="true"
     tocDepth="4">

  <front>
    <title abbrev="Proof of Process">Proof of Process: An Evidence Framework for Digital Authorship Attestation</title>
    <seriesInfo name="Internet-Draft" value="draft-condrey-rats-pop-01"/>
    <author fullname="David Condrey" initials="D." surname="Condrey">
      <organization abbrev="Writerslogic">Writerslogic Inc</organization>
      <address>
        <postal>
          <city>San Diego</city>
          <region>California</region>
          <country>United States</country>
        </postal>
        <email>david@writerslogic.com</email>
        <uri>https://writerslogic.com</uri>
      </address>
    </author>
    <date year="2026" month="February" day="11"/>

    <area>Security</area>
    <workgroup>Remote ATtestation procedureS</workgroup>

    <keyword>attestation</keyword>
    <keyword>evidence</keyword>
    <keyword>authorship</keyword>
    <keyword>RATS</keyword>
    <keyword>behavioral</keyword>
    <keyword>VDF</keyword>
    <keyword>verifiable delay function</keyword>
    <keyword>provenance</keyword>
    <keyword>digital authorship</keyword>

    <abstract>
      <t>
        This document specifies the Proof of Process (PoP) Evidence Framework, a specialized profile of Remote Attestation Procedures (RATS) designed to validate the provenance of effort in digital authorship. Unlike traditional provenance, which tracks file custody, PoP attests to the continuous, human-driven process of creation.
      </t>
      <t>
        The framework defines a cryptographic mechanism for generating Evidence Packets containing Verifiable Delay Functions (VDFs) to enforce temporal monotonicity and Jitter Seals to bind behavioral entropy (motor-signal randomness) to the document state. These mechanisms allow a Verifier to cryptographically distinguish between human-generated keystrokes, algorithmic generation, and copy-paste operations. Crucially, this verification relies on statistical process metrics and cryptographic binding, enabling authorship attestation without disclosing the semantic content of the document, thereby preserving privacy by design.
      </t>
    </abstract>

    <note removeInRFC="true">
      <name>About This Document</name>
      <t>
        Status of this Memo: This Internet-Draft is submitted in full conformance
        with the provisions of BCP 78 and BCP 79.
      </t>
      <t>
        Internet-Drafts are working documents of the Internet Engineering Task
        Force (IETF). Note that other groups may also distribute working documents
        as Internet-Drafts. The list of current Internet-Drafts is at
        <eref target="https://datatracker.ietf.org/drafts/current/"/>.
      </t>
      <t>
        Internet-Drafts are draft documents valid for a maximum of six months
        and may be updated, replaced, or obsoleted by other documents at any
        time. It is inappropriate to use Internet-Drafts as reference material
        or to cite them other than as "work in progress."
      </t>
    </note>

    <note>
      <name>Copyright Notice</name>
      <t>
        Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.
      </t>
    </note>

  </front>


  <middle>
    <section anchor="introduction">
      <name>Introduction</name>
      <t>
        In the Remote Attestation Procedures (RATS) architecture <xref target="RFC9334"/>, "Evidence" is typically a snapshot of system state (e.g., firmware measurements) at a single point in time. However, verifying digital authorship requires attesting to a continuous process rather than a static state. Current mechanisms fall short: digital signatures prove consent, and RFC 3161 <xref target="RFC3161"/> timestamps prove existence, but neither can attest to the provenance of effort, that is, the specific expenditure of time, human attention, and mechanical interaction required to create a document.
      </t>
      <t>
        This document specifies the Proof of Process (PoP) Evidence Framework, a specialized RATS profile for generating tamper-evident, non-repudiable evidence of an authoring session. It introduces Verifiable Delay Functions (VDFs) to enforce temporal monotonicity (preventing backdating) and Jitter Seals to bind behavioral entropy (human motor-signal randomness) to the document's evolution.
      </t>
      <t>
        By entangling content hashes with these physical and behavioral constraints, this protocol enables an Attester to generate an Evidence Packet (.pop) that cryptographically distinguishes between human generation, algorithmic generation, and bulk mechanical insertion (paste operations), without requiring privacy-invasive surveillance or revealing the document's semantic content.
      </t>
    </section>
    
    <section anchor="claims-and-non-claims">
      <name>Claims and Non-Claims</name>
      <t>This section is normative. Implementations and Verifier policies MUST distinguish between cryptographic assertions (facts proven by the protocol) and inferential judgements (probabilistic assessments).</t>
      <section anchor="hard-claims">
        <name>Cryptographic Assertions (Hard Claims)</name>
        <t>
          The protocol guarantees the following properties, relying solely on the cryptographic primitives SHA-256, VDF, and HMAC:
        </t>
        <ul>
          <li>Temporal Ordering: Checkpoint N was created strictly after Checkpoint N−1.</li>
          <li>Minimum Effort Cost: The time spent generating the Evidence Chain is ≥ the sum of the VDF difficulties, establishing a lower bound on the "cost of forgery" in wall-clock time.</li>
          <li>Chain Integrity: The document state at Checkpoint N is the sole parent of Checkpoint N+1; no history has been inserted or deleted without breaking the hash chain.</li>
          <li>Entropy Binding: The timing data recorded in the evidence was captured prior to the computation of the subsequent VDF proof, preventing "look-ahead" or pre-computation attacks.</li>
        </ul>
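        <t>
          As an informal illustration of the Chain Integrity assertion, the following Python sketch shows how a hash chain makes inserted or deleted history detectable. The linkage rule and genesis value here are illustrative only, not the normative checkpoint format:
        </t>
        <sourcecode type="python"><![CDATA[
```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain_digests(content_hashes, genesis=b"\x00" * 32):
    # Each checkpoint digest commits to its sole parent, so history
    # cannot be inserted or deleted without breaking the chain.
    d, out = genesis, []
    for c in content_hashes:
        d = H(d + c)
        out.append(d)
    return out

drafts = [H(b"draft v%d" % i) for i in range(4)]
good = chain_digests(drafts)
forged = chain_digests([drafts[0], H(b"forged"), drafts[2], drafts[3]])
```
]]></sourcecode>
        <t>
          Altering one mid-chain checkpoint diverges every subsequent digest, which is what forces a forger to recompute all downstream proofs.
        </t>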
      </section>
      <section anchor="soft-claims">
        <name>Behavioral Inferences (Soft Claims)</name>
        <t>
          Based on the analysis of the authenticated Evidence, a Verifier MAY infer:
        </t>
        <ul>
          <li>Source Consistency: The statistical likelihood that the input stream (keystroke dynamics) belongs to a single continuous actor.</li>
          <li>Anomaly Detection: The presence of discontinuities (e.g., sudden changes in typing rhythm) that correlate with tool usage or copy-paste operations.</li>
        </ul>
      </section>
      <section anchor="non-claims">
        <name>Excluded Claims (Non-Claims)</name>
        <t>This protocol explicitly does NOT support the following claims:</t>
        <ul>
          <li>"Human vs. AI" Classification: The protocol measures signal characteristics (entropy, rhythm), not cognitive origin. A high-entropy signal is "consistent with human input," not "proven human thought."</li>
          <li>"Cheating" or "Plagiarism": These are policy judgements, not technical facts. The protocol reports events (e.g., "large text block inserted"); the Relying Party determines if this constitutes a policy violation.</li>
          <li>Identity Attribution: While the evidence binds to a signing key, it does not inherently bind to a specific legal identity unless combined with external PKI or biometric identity assertions.</li>
        </ul>
      </section>
    </section>

    <section anchor="problem-statement">
      <name>Problem Statement</name>
      <t>
        Digital documents lack creation-process provenance. COSE <xref target="RFC9052"/> signatures prove key possession, and RFC 3161 <xref target="RFC3161"/> timestamps prove existence, but neither reveals <em>how</em> the document evolved.
      </t>

      <t>
        Existing approaches fail to meet modern needs:
      </t>
      <ul>
        <li>Surveillance (screen/keystroke logging): violates privacy, requires trusting a third party, and is unverifiable without retained archives.</li>
        <li>Content analysis (stylometry, AI detectors): probabilistic, vulnerable to adversarial evasion, and examines only the finished product, not the process.</li>
      </ul>

      <t>
        A suitable mechanism must therefore be privacy-preserving (hash-only, SHA-256 <xref target="RFC6234"/>), independently verifiable (self-contained proofs), tamper-evident (hash, HMAC <xref target="RFC2104"/>, and VDF chains), and process-documenting (capturing evolution, not contents).
      </t>

      <t>Use cases include academic integrity (in the face of increasingly sophisticated AI assistance), legal provenance, creative attribution, and professional standards.</t>
    </section>

    <section anchor="scope">
      <name>Scope</name>

      <section anchor="what-this-specifies">
        <name>What This Specification Defines</name>
        <ul>
          <li>Evidence format (.pop): Merkle trees (SHA-256), entropy bindings, and VDF proofs <xref target="Pietrzak2019"/> <xref target="Wesolowski2019"/>, encoded in CBOR <xref target="RFC8949"/> with tag 1347440720.</li>
          <li>Result format (.war): Verifier appraisals (COSE, EAT <xref target="RFC9711"/>), encoded in CBOR with tag 1463898656.</li>
          <li>Checkpoint structure: Content hashes (SHA-256 <xref target="RFC6234"/>), timing proofs, behavioral summaries.</li>
          <li>Verification procedures: Self-contained, optional RFC 3161 anchors.</li>
          <li>Claim taxonomy: Chain-verifiable vs. monitoring-dependent (CDDL <xref target="RFC8610"/>).</li>
        </ul>
      </section>

      <section anchor="what-this-does-not-specify">
        <name>What This Specification Does NOT Define</name>
        <ul>
          <li>Content analysis: no stylometry or semantic inspection (Evidence is hash-only, SHA-256).</li>
          <li>Author identity: no claims about a natural person; Evidence binds only to a signing key (COSE <xref target="RFC9052"/>).</li>
          <li>Intent or cognition: no inference about mental state.</li>
          <li>AI classification: process evidence only; interpretation is a matter of Relying Party policy.</li>
          <li>Surveillance: no content capture, logging, or screen monitoring (timing histograms only).</li>
        </ul>

        <t>These exclusions enable privacy-by-construction in the RATS <xref target="RFC9334"/> profile.</t>
      </section>

      <section anchor="rats-relationship">
        <name>Relationship to RATS</name>
        <t>The framework maps onto the RATS roles as follows:</t>
        <dl>
          <dt>Attester:</dt>
          <dd>The witnessd-core library, which produces .pop Evidence locally (Merkle/SHA-256, VDF, entropy binding).</dd>
          <dt>Verifier:</dt>
          <dd>Parses and appraises .pop Evidence, producing a COSE-signed .war Result.</dd>
          <dt>Relying Party:</dt>
          <dd>Consumes .war Results (institutions, publishers, legal systems).</dd>
        </dl>

        <t>Profile extensions beyond baseline RATS: HMAC-SHA256 <xref target="RFC2104"/> entropy binding, and VDFs for sequential-time proofs (relative ordering, with optional RFC 3161 <xref target="RFC3161"/> absolute anchoring).</t>
      </section>
    </section>

    <section anchor="design-goals">
      <name>Design Goals</name>
      <t>Four principles guide this RATS profile and its use of SHA-256, COSE, HMAC, CBOR/CDDL, VDFs, and RFC 3161:</t>

      <section anchor="privacy-by-construction">
        <name>Privacy by Construction</name>
        <t>Privacy is enforced structurally by the CBOR/CDDL schema: Evidence carries only SHA-256 <xref target="RFC6234"/> hashes, never content; keystroke timings are reduced to millisecond-interval histograms; no visual capture occurs; and aggregation prevents reconstruction of the underlying text. Because the schema has no field for content, a conforming encoder cannot leak it.</t>
      </section>

      <section anchor="zero-trust">
        <name>Zero Trust</name>
        <t>Aligned with RATS, Evidence is generated entirely locally (SHA-256, VDF, HMAC, COSE); verification is self-contained over the CBOR packet, with optional RFC 3161 <xref target="RFC3161"/> anchoring; and multiple independent Verifiers can adversarially appraise the same Evidence against the CDDL schemas.</t>
      </section>

      <section anchor="evidence-over-inference">
        <name>Evidence Over Inference</name>
        <t>Evidence records CBOR-encoded facts traceable to SHA-256, HMAC, and VDF computations. Claims are classified as computationally bound or monitoring-dependent (the CDDL ae-trust-basis field), and COSE-signed Results document which factors were actually verified (entropy, VDF, TPM <xref target="TPM2.0"/>, RFC 3161). The protocol asserts no absolutes about authorship, intent, or authenticity; interpretation is Relying Party policy expressed over EAT <xref target="RFC9711"/> claims.</t>
      </section>

      <section anchor="cost-asymmetric-forgery">
        <name>Cost-Asymmetric Forgery</name>
        <t>VDFs enforce sequential computation time, SHA-256 entropy commitments make recorded timings irrecoverable after the fact, and HMAC chaining causes any modification to cascade. Selective forgery therefore requires recomputing every downstream VDF, work that cannot be parallelized. <xref target="forgery-cost-bounds"/> quantifies this bound, with the goal that the economic cost of forgery exceeds the value of the forged document. <em>Forgery is possible but costly</em>; this cost asymmetry complements the unconditional guarantees of SHA-256, HMAC, and COSE.</t>
      </section>
    </section>

    <section anchor="terminology">
      <name>Terminology</name>
      <t>BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> applies. The abbreviation PPPP is used for this protocol to avoid collision with PPP (RFC 1661) and with the proof-of-possession sense of PoP (RFC 5280).</t>

      <t>The following key terms are used throughout this document:</t>

      <dl>
        <dt>PPPP Evidence (.pop):</dt>
        <dd>The <xref target="RFC9334"/> Attester artifact: Merkle trees (SHA-256), HMAC entropy bindings, and VDF proofs, encoded in CBOR with tag 1347440720 (hex 0x50505050, ASCII "PPPP"; CDDL <xref target="RFC8610"/>). Raw process metrics (linearity, edit patterns, fatigue, spectral features) are carried uninterpreted.</dd>
        <dt>PPPP Result (.war):</dt>
        <dd>The Verifier's Attestation Result, signed with COSE and encoded in CBOR with tag 1463898656 (hex 0x57415220, ASCII "WAR "). Source-consistency appraisal is policy-based, so different Verifiers may produce different Results from the same Evidence.</dd>
        <dt>Residency:</dt>
        <dd>The hardware origin of the Attesting Environment, ranging from software-only up through TPM 2.0 <xref target="TPM2.0"/> and enclave residency.</dd>
        <dt>Sequence:</dt>
        <dd>A VDF-enforced lower bound on elapsed time that cannot be reduced by parallel computation.</dd>
        <dt>Behavioral Consistency:</dt>
        <dd>Unified statistics over the authoring process (timing and edit evolution). Histograms protect the privacy of raw intervals.</dd>
        <dt>SA-VDF:</dt>
        <dd>A Pietrzak-style VDF whose evaluation is HMAC-bound to the local hardware, preventing fast migration of a session to faster machines.</dd>
      </dl>
    </section>

    <section anchor="document-structure">
      <name>Document Structure</name>

      <t>This document builds on the RATS architecture, using CBOR/CDDL encodings and SHA-256/COSE verification throughout. The remainder is organized as follows:</t>

      <ul>
        <li><xref target="evidence-model"/>: Architecture, RATS role mappings, and formats (tags 1347440720/1463898656).</li>
        <li><xref target="jitter-seal"/>: HMAC-SHA256 entropy binding.</li>
        <li><xref target="vdf-mechanisms"/>: VDF temporal proofs.</li>
        <li><xref target="absence-proofs"/>: Claims (SHA-256/HMAC bound vs. monitoring).</li>
        <li><xref target="forgery-cost-bounds"/>: VDF economics.</li>
        <li><xref target="security-considerations"/>: Threats/mitigations.</li>
        <li><xref target="privacy-considerations"/>: Behavioral handling.</li>
        <li><xref target="iana-considerations"/>: Tags/EAT/media types.</li>
      </ul>

      <t>The appendices provide consolidated CDDL schemas, SHA-256 test vectors, and implementation guidance for RATS Attesters and Verifiers.</t>

      <t>
        Companion documents: <xref target="I-D.condrey-rats-pop-protocol"/> (transcript format),
        <xref target="I-D.condrey-rats-pop-schema"/> (CDDL schema),
        <xref target="I-D.condrey-rats-pop-examples"/> (examples and test vectors).
      </t>
    </section>

    <section anchor="conventions">
    <name>Conventions and Definitions</name>

    <section anchor="domain-separation">
      <name>Domain Separation Constants</name>
      <t>
        To prevent cross-protocol attacks, all HMAC and KDF operations MUST use
        explicit domain separation labels. The following constants are defined:
      </t>
      <ul>
        <li><tt>DST_JITTER</tt>: "witnessd-jitter-binding-v1"</li>
        <li><tt>DST_CHAIN</tt>: "witnessd-chain-mac-v1"</li>
        <li><tt>DST_CLOCK</tt>: "witnessd-entropic-clock-v1"</li>
        <li><tt>DST_LINK</tt>: "witnessd-link-token-v1"</li>
      </ul>
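      <t>
        The intent of these labels can be sketched as a label-prefixed HMAC (Python; the exact concatenation and the zero-byte separator shown here are illustrative assumptions, not the normative construction):
      </t>
      <sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac

DST_JITTER = b"witnessd-jitter-binding-v1"
DST_CHAIN = b"witnessd-chain-mac-v1"

def labeled_hmac(dst: bytes, key: bytes, message: bytes) -> bytes:
    # Prefixing a domain-separation label ensures a MAC computed for
    # one purpose can never verify in another protocol context.
    return hmac.new(key, dst + b"\x00" + message, hashlib.sha256).digest()

key = b"\x11" * 32
msg = b"checkpoint-payload"
jitter_mac = labeled_hmac(DST_JITTER, key, msg)
chain_mac = labeled_hmac(DST_CHAIN, key, msg)
```
]]></sourcecode>
      <t>
        The same key and message yield unrelated MACs under different labels, which is the cross-protocol property the constants exist to provide.
      </t>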
    </section>

    <section anchor="cddl-notation">
      <name>CDDL Notation</name>

      <t>
        Data structures in this architecture document are specified using the Concise Data Definition Language (CDDL) <xref target="RFC8610"/>, a notation for expressing CBOR <xref target="RFC8949"/> and JSON data structures with precision, giving implementers unambiguous guidance for encoding and decoding Evidence Packets and Attestation Results. Normative CDDL definitions appear inline in the relevant sections so that each structure is described in context, and a complete consolidated schema is provided in the appendices as a single authoritative reference. CDDL is used throughout to define checkpoints with SHA-256 <xref target="RFC6234"/> hash bindings, jitter-binding structures with HMAC <xref target="RFC2104"/> authentication, VDF proofs <xref target="Pietrzak2019"/> <xref target="Wesolowski2019"/>, and COSE <xref target="RFC9052"/> signatures, with all type definitions following the conventions of RFC 8610.
      </t>
    </section>

    <section anchor="intro-cbor-encoding">
      <name>CBOR Encoding</name>

      <t>
        Both Evidence Packets and Attestation Results use CBOR encoding per RFC 8949, an efficient binary format with semantic tags and extensibility well suited to compact representation of cryptographic evidence: SHA-256 hashes, HMAC bindings, VDF proofs, and COSE signatures. Semantic tags enable format detection without external metadata: Evidence Packets use the PPPP tag (1347440720) and Attestation Results use the WAR tag (1463898656), as defined in <xref target="terminology"/>. Integer keys in the range 1-99 are reserved for core protocol fields defined by this specification to minimize encoding size; string keys carry vendor extensions and application-specific fields beyond the base CDDL schema. Deterministic encoding per RFC 8949 Section 4.2 is RECOMMENDED for signature verification, so that the same logical structure always produces identical bytes when computing SHA-256 hashes or verifying COSE signatures: map keys sorted in bytewise lexicographic order, integers in minimal representation, and floating-point values canonicalized.
      </t>
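      <t>
        The deterministic-encoding rules can be illustrated with a hand-rolled encoder for the small subset used here (Python, standard library only; a production implementation would use a full CBOR library). Minimal-length integer heads and map entries sorted by their encoded key bytes guarantee byte-identical output for the same logical map:
      </t>
      <sourcecode type="python"><![CDATA[
```python
import struct

def head(major: int, n: int) -> bytes:
    # Encode a CBOR head with the minimal-length argument (RFC 8949, 4.2).
    if n < 24:
        return bytes([(major << 5) | n])
    if n < 0x100:
        return bytes([(major << 5) | 24, n])
    if n < 0x10000:
        return bytes([(major << 5) | 25]) + struct.pack(">H", n)
    if n < 0x100000000:
        return bytes([(major << 5) | 26]) + struct.pack(">I", n)
    return bytes([(major << 5) | 27]) + struct.pack(">Q", n)

def encode(obj) -> bytes:
    if isinstance(obj, bool):
        raise TypeError("booleans not needed for this sketch")
    if isinstance(obj, int) and obj >= 0:
        return head(0, obj)                     # unsigned integer
    if isinstance(obj, bytes):
        return head(2, len(obj)) + obj          # byte string
    if isinstance(obj, str):
        u = obj.encode("utf-8")
        return head(3, len(u)) + u              # text string
    if isinstance(obj, dict):
        # Deterministic maps: entries sorted bytewise by encoded key.
        items = sorted((encode(k), encode(v)) for k, v in obj.items())
        return head(5, len(items)) + b"".join(k + v for k, v in items)
    raise TypeError(type(obj))

a = encode({2: b"\x01", 1: b"\x02", "vendor-ext": b"\x03"})
b = encode({"vendor-ext": b"\x03", 1: b"\x02", 2: b"\x01"})
```
]]></sourcecode>
      <t>
        Two maps with the same entries in different insertion order encode to the same bytes, which is the property signature verification depends on.
      </t>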
    </section>

    <section anchor="cose-signatures">
      <name>COSE Signatures</name>

      <t>
        COSE (CBOR Object Signing and Encryption) per RFC 9052 provides the cryptographic signatures used throughout this specification, authenticating Evidence Packets and Attestation Results within the CBOR encoding framework. The COSE_Sign1 structure of RFC 9052 provides single-signer signatures suitable for both artifact types: a protected header carrying the algorithm identifier, an unprotected header for optional metadata, and signature bytes computed over the CBOR-encoded payload. EdDSA with Ed25519 is RECOMMENDED for new implementations for its fast signing and verification, constant-time resistance to timing attacks, and compact 64-byte signatures; ECDSA with P-256 as defined in RFC 9052 is supported for compatibility with existing PKI infrastructures and hardware security modules, including TPM 2.0 <xref target="TPM2.0"/>. The selected algorithm is indicated in the COSE protected header using registered algorithm identifiers, allowing Verifiers to choose the correct verification procedure without external negotiation.
      </t>
    </section>

    <section anchor="eat-tokens">
      <name>EAT Tokens</name>

      <t>
        This architecture document defines an Entity Attestation Token (EAT) profile per RFC 9711 <xref target="RFC9711"/>, extending the RATS <xref target="RFC9334"/> attestation framework with domain-specific claims for behavioral evidence and process documentation. EAT provides a claims framework with support for custom claim types, allowing Proof of Process claims, including forensic-assessment verdicts, presence-score values, evidence-tier levels, and AI-composite scores, to be expressed in a standardized structure encoded in CBOR and signed with COSE. The EAT profile URI for Proof of Process evidence is https://example.com/rats/eat/profile/pop/1.0, with IANA registration to be requested upon working group adoption as detailed in <xref target="iana-considerations"/>. The custom EAT claims proposed for registration cover behavioral evidence (pop-presence-score, pop-ai-composite-score), temporal evidence (VDF duration bounds), and process documentation (segment counts, entropy thresholds), enabling interoperability between RATS implementations that support this profile.
      </t>
    </section>

    <section anchor="hash-notation">
      <name>Hash Function Notation</name>

      <t>
        The following notation for cryptographic hash functions is used throughout this document, with all hash operations conforming to RFC 6234 unless otherwise indicated. H(x) denotes the SHA-256 hash of input x, producing a 256-bit (32-byte) output; SHA-256 is the default algorithm for content hashes, segment hashes, and entropy commitments. H^n(x) denotes n iterations of H, as used in iterated-hash VDF constructions. HMAC(k, m) denotes HMAC-SHA256 per RFC 2104 with key k and message m, used for binding operations including the chain-mac and the jitter binding-mac. SHA-256 is the RECOMMENDED hash algorithm for all operations: it is widely implemented (including hardware acceleration in modern processors), well analyzed, and resistant to known collision, preimage, and second-preimage attacks. Implementations MAY support SHA3-256 for algorithm agility, as indicated in the CDDL hash-algorithm enumeration, in environments that prioritize resistance to potential future attacks on the SHA-2 family or where regulation mandates SHA-3; when SHA3-256 is used, the HMAC construction remains valid because HMAC is hash-function-agnostic.
      </t>
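      <t>
        The notation maps directly onto standard-library primitives, as the following Python sketch shows (notation only; the iteration count of a real VDF is a protocol parameter, not shown here):
      </t>
      <sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac

def H(x: bytes) -> bytes:
    # H(x): SHA-256, producing a 256-bit (32-byte) output
    return hashlib.sha256(x).digest()

def H_n(x: bytes, n: int) -> bytes:
    # H^n(x): n sequential applications of H, as in iterated-hash VDFs;
    # each step depends on the previous output, so it cannot be parallelized
    for _ in range(n):
        x = H(x)
    return x

def HMAC(k: bytes, m: bytes) -> bytes:
    # HMAC(k, m): HMAC-SHA256 per RFC 2104 with key k and message m
    return hmac.new(k, m, hashlib.sha256).digest()
```
]]></sourcecode>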
    </section>
  </section>


    <!-- Section 2: Evidence Model -->
    <section anchor="evidence-model">
      <name>Evidence Model</name>

      <t>
        This section delineates the top-level architecture of the witnessd Proof of Process evidence model. The design follows the RATS (Remote ATtestation procedureS) architecture <xref target="RFC9334"/> while introducing domain-specific extensions: behavioral evidence encoded in CBOR <xref target="RFC8949"/>, cryptographic proofs computed with SHA-256 <xref target="RFC6234"/> and HMAC <xref target="RFC2104"/>, temporal ordering via VDFs <xref target="Pietrzak2019"/> <xref target="Wesolowski2019"/>, and process documentation structured according to CDDL <xref target="RFC8610"/> schemas. The section describes the structural components and their relationships, establishing the foundation for subsequent sections, with particular attention to the cryptographic bindings that ensure tamper-evidence, the COSE <xref target="RFC9052"/> signatures that provide authentication, and the EAT <xref target="RFC9711"/> profile that enables interoperability with other RATS implementations.
      </t>

      <section anchor="rats-architecture-mapping">
        <name>RATS Architecture Mapping</name>

        <t>
          This specification implements a RATS profile with the following role mappings. The witnessd-core library acts as the Attester, producing Evidence Packets (.pop files) encoded in CBOR with semantic tag 1347440720 and containing segment-based Merkle trees with SHA-256 hash linkage, VDF proofs, and jitter bindings authenticated via HMAC. Verification implementations act as Verifiers, parsing CBOR-encoded Evidence Packets per the CDDL schema and producing Attestation Results (.war files) signed with COSE. Consuming entities (academic institutions, publishers, legal systems) act as Relying Parties, interpreting the EAT claims in Attestation Results to make trust decisions. Evidence is generated locally on the Attester device without network dependency; all cryptographic operations, including SHA-256 hashing, VDF computation, and COSE signing, use only local resources. Verification requires only the CBOR-encoded Evidence Packet itself; Evidence contains SHA-256 hashes rather than document content, and behavioral signals are aggregated into histograms before inclusion. The result is a privacy-preserving attestation mechanism that requires no trusted infrastructure beyond the Attesting Environment and optional external anchors such as RFC 3161 timestamps.
        </t>
      </section>

      <section anchor="evidence-flow">
        <name>Evidence Flow</name>
        <t>
      PPPP operates in the RATS passport model:
      the Attester generates Evidence locally without network dependency,
      and Evidence is conveyed to the Verifier out of band for deferred
      appraisal. No real-time interaction between Attester and Verifier
      is required for evidence generation.
    </t>
    <t>
      The evidence flow proceeds as follows:
    </t>
    <ol>
      <li>The Attesting Environment runs locally alongside the authoring
      tool, capturing edit operations, timing intervals, and document
      state transitions as they occur.</li>
      <li>At each checkpoint, the Attesting Environment computes a content
      hash (SHA-256), commits behavioral entropy
      via HMAC, and computes a VDF temporal proof
      binding content, timing, and previous
      checkpoint state into a chain.</li>
      <li>On session completion, the Attesting Environment packages all
      checkpoints into a signed Evidence Packet (.pop) using COSE.</li>
      <li>The Evidence Packet is conveyed to a Verifier at a time
      determined by the author or Relying Party, potentially minutes,
      days, or months after creation.</li>
      <li>The Verifier independently appraises the Evidence Packet,
      producing an Attestation Result (.war) documenting what was
      verified, with confidence scores and caveats.</li>
    </ol>
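    <t>
      Steps 1 through 3 can be sketched as follows (Python; the iterated-hash VDF stand-in, the two-byte interval encoding, and the field concatenation order are illustrative assumptions, not the normative checkpoint layout):
    </t>
    <sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def vdf(seed: bytes, t: int) -> bytes:
    # Stand-in iterated-hash VDF: t sequential hashes enforce a
    # lower bound on elapsed wall-clock time.
    for _ in range(t):
        seed = H(seed)
    return seed

def checkpoint(prev: bytes, content: bytes, intervals_ms, session_key: bytes,
               t: int = 10_000) -> bytes:
    content_hash = H(content)
    # Commit behavioral entropy before the VDF runs, preventing
    # look-ahead: the timings are fixed when the temporal proof starts.
    timing = b"".join(i.to_bytes(2, "big") for i in intervals_ms)
    entropy_commit = hmac.new(session_key, timing, hashlib.sha256).digest()
    proof = vdf(H(prev + content_hash + entropy_commit), t)
    # The digest binds content, timing, and the previous checkpoint.
    return H(prev + content_hash + entropy_commit + proof)

key = b"\x22" * 32
c0 = checkpoint(b"\x00" * 32, b"Introduction ...", [112, 97, 140], key)
c1 = checkpoint(c0, b"Introduction ... expanded", [88, 131, 104], key)
```
]]></sourcecode>
    <t>
      Because each digest commits to its predecessor, the Verifier can replay this computation from the packaged checkpoints alone.
    </t>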
    <t>
      When a Relying Party requires proof of freshness, an OPTIONAL
      verifier-provided nonce MAY be incorporated into the Evidence
      Packet's final signature. This is the only interactive element
      in the protocol and is not required for evidence generation.
    </t>
      </section>

      <section anchor="source-consistency">
    <name>Source Consistency Analysis</name>
    <t>
      The core analytical claim of PPPP is source consistency: whether
      the evidence chain reflects a single coherent generative process
      throughout a document's lifecycle. The framework does not classify
      content as human-written or AI-generated. It detects transitions
      in the character of the generative process and maps them as
      source consistency events.
    </t>
    <t>
      Source consistency is evaluated across the checkpoint chain by
      measuring behavioral characteristics at each checkpoint and
      analyzing their coherence over time. Characteristics include
      edit operation type distribution (ratio of insertions, deletions,
      revisions, structural edits, and navigation events), timing
      patterns relative to content complexity, revision density, and
      temporal evolution of behavioral metrics across the session.
    </t>
    <t>
      The following source consistency transition patterns are defined
      as informational guidance for Verifier implementers:
    </t>
    <table>
      <thead>
        <tr>
          <th>Pattern</th>
          <th>Signature</th>
          <th>Interpretation</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>Consistent</td>
          <td>All checkpoints conform</td>
          <td>Single source, stable process throughout</td>
        </tr>
        <tr>
          <td>Sudden transition</td>
          <td>Conforming then non-conforming</td>
          <td>Late-stage process change or handoff</td>
        </tr>
        <tr>
          <td>Gradual drift</td>
          <td>Conformity degrades progressively</td>
          <td>Increasing process assistance over time</td>
        </tr>
        <tr>
          <td>Intermittent</td>
          <td>Alternating conformity</td>
          <td>Hybrid workflow with multiple sources</td>
        </tr>
        <tr>
          <td>Bookend</td>
          <td>Non-conforming opening and closing</td>
          <td>Different process for introduction and conclusion</td>
        </tr>
      </tbody>
    </table>
    <t>
      These patterns are not normative verification gates. The Verifier
      records the pattern; the Relying Party decides whether the pattern
      is acceptable for their use case. A hybrid workflow may be
      entirely appropriate for some domains and unacceptable in others.
    </t>
      </section>

      <section anchor="decision-history">
    <name>Decision History</name>
    <t>
      Every edit operation in the evidence chain (every insertion,
      deletion, revision, and structural edit) represents a creative
      decision. The sequence of these decisions constitutes the authoring
      process, and PPPP captures this decision history as the primary
      evidence artifact.
    </t>
    <t>
      Edit operations are classified by type without recording content:
    </t>
    <dl>
      <dt>Composition:</dt>
      <dd>New text creation: insertions that extend the document.</dd>
      <dt>Revision:</dt>
      <dd>Modification of existing text: deletions followed by
      insertions at the same location, select-and-replace operations.</dd>
      <dt>Structural:</dt>
      <dd>Document reorganization: cut and paste, section reordering,
      large-scale moves.</dd>
      <dt>Navigation:</dt>
      <dd>Cursor movement without content change: reading, reviewing,
      positioning for subsequent edits.</dd>
    </dl>
    <t>
      The distribution and sequencing of these operation types over the
      evidence chain is itself a fingerprint of the authoring process.
      Composition produces varied operation sequences with revisions,
      cursor movements, and structural edits interspersed among insertions.
      Transcription produces predominantly monotonic insertion streams
      with occasional single-character corrections. The evidence chain
      records these patterns without judging them.
    </t>
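    <t>
      The composition-versus-transcription distinction can be made concrete with a toy distribution check (Python; the operation labels and the 95% threshold are hypothetical illustrations, not normative gates):
    </t>
    <sourcecode type="python"><![CDATA[
```python
from collections import Counter

def op_distribution(ops):
    # Per-session histogram of edit-operation types; only operation
    # types are counted, never content.
    n = len(ops)
    return {k: v / n for k, v in Counter(ops).items()}

def looks_transcribed(dist, threshold=0.95):
    # Transcription skews heavily toward monotonic insertion streams.
    return dist.get("composition", 0.0) > threshold

# Hypothetical composition session: insertions interleaved with
# revisions, navigation, and occasional structural edits.
composition = ["composition"] * 40 + ["navigation"] * 12 + \
              ["revision"] * 9 + ["structural"] * 2
# Hypothetical transcription session: near-monotonic insertions.
transcription = ["composition"] * 97 + ["revision"] * 1

d1 = op_distribution(composition)
d2 = op_distribution(transcription)
```
]]></sourcecode>
    <t>
      A Verifier records such distributions as evidence; whether a transcription-like profile is acceptable remains a Relying Party decision.
    </t>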
      </section>

      <section anchor="document-classification">
    <name>Privacy-Preserving Document Classification</name>
    <t>
      Source consistency is evaluated against domain-appropriate
      expectations. A short essay legitimately written front-to-back has
      different expected characteristics than a novel written over months
      with non-linear revision. The document profile is derived from
      behavioral signals without accessing content:
    </t>
    <dl>
      <dt>Sentence length distribution:</dt>
      <dd>Character count between sentence-boundary keystrokes (period,
      space, shift sequences).</dd>
      <dt>Paragraph rhythm:</dt>
      <dd>Frequency and regularity of paragraph-break operations.</dd>
      <dt>Vocabulary complexity proxy:</dt>
      <dd>Word length distribution derived from character counts between
      space keystrokes.</dd>
      <dt>Revision density:</dt>
      <dd>Edit operations per checkpoint, ratio of deletions to
      insertions.</dd>
      <dt>Structural edit frequency:</dt>
      <dd>Cut/paste operations, cursor movements beyond local context,
      select-and-replace events.</dd>
    </dl>
    <t>
      The Attesting Environment computes this classification locally
      and includes it as a document-profile field in the Evidence Packet.
      The author MAY additionally declare a document type. When both
      behavioral classification and author declaration are present, the
      Verifier can assess their consistency - divergence between declared
      type and observed behavioral profile is itself a signal that the
      Relying Party may evaluate.
    </t>
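    <t>
      A minimal sketch of deriving such a profile from a content-free
      event stream follows (the event labels and output field names are
      illustrative, not part of the evidence format):
    </t>
    <sourcecode type="python"><![CDATA[
import statistics

def document_profile(events):
    """Derive a content-free profile from keystroke event kinds:
    'char', 'space', 'delete', 'paragraph-break' (illustrative
    labels). Only event categories are inspected, never characters."""
    word_lengths, run = [], 0
    insertions = deletions = para_breaks = 0
    for kind in events:
        if kind == "char":
            insertions += 1
            run += 1
        elif kind == "space":
            insertions += 1
            if run:
                word_lengths.append(run)  # word length proxy
            run = 0
        elif kind == "delete":
            deletions += 1
        elif kind == "paragraph-break":
            para_breaks += 1
    if run:
        word_lengths.append(run)
    return {
        "mean_word_length":
            statistics.mean(word_lengths) if word_lengths else 0.0,
        "revision_ratio": deletions / max(insertions, 1),
        "paragraph_breaks": para_breaks,
    }
]]></sourcecode>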
      </section>

      <section anchor="input-trust-boundary">
    <name>Input Event Trust Boundary</name>
    <t>
      The Attesting Environment captures input timing at the OS HID
      event layer. This establishes the trust boundary for behavioral
      entropy collection. The trust boundary differs by assurance tier:
    </t>
    <table>
      <thead>
        <tr>
          <th>Tier</th>
          <th>Input Trust Boundary</th>
          <th>Injection Defense</th>
          <th>Residual Risk</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>T1-T2</td>
          <td>OS HID event layer</td>
          <td>VDF cost asymmetry, chain HMAC, content binding</td>
          <td>Privileged software injection</td>
        </tr>
        <tr>
          <td>T3</td>
          <td>OS HID + TPM signing</td>
          <td>Above + hardware-bound key, platform measurement</td>
          <td>Injection without boot chain alteration</td>
        </tr>
        <tr>
          <td>T4</td>
          <td>TEE interrupt capture</td>
          <td>Above + pre-OS event capture</td>
          <td>Enclave compromise</td>
        </tr>
      </tbody>
    </table>
    <t>
      At T1 and T2, the adversary model assumes the OS input stack is
      not compromised. Synthetic event injection by a privileged attacker
      is not prevented by the protocol but is made economically costly
      by VDF-jitter entanglement and content binding. At T3, TPM-bound
      signing constrains evidence to specific hardware without protecting
      the input path. At T4, TEE-based capture moves the trust boundary
      below the OS, requiring enclave compromise for input injection.
    </t>
    <t>
      Evidence metadata includes the input transport class (USB HID,
      built-in keyboard, Bluetooth Classic, BLE) so that Verifiers
      can adjust confidence based on the transport's timing fidelity.
      Bluetooth connections introduce variable latency (5-30ms) that
      degrades behavioral signal quality; this is reflected in reduced
      confidence scores rather than evidence rejection.
    </t>
      </section>
      <section anchor="two-formats">
    <name>Two Complementary Formats</name>

    <t>
      The witnessd protocol defines two file formats that serve distinct roles in the attestation workflow defined by the RATS architecture. Both are encoded using CBOR per the CDDL schemas in the appendices, with registered semantic tags for type identification so that parsers can determine the packet type from the leading tag value.
    </t>

    <section anchor="pop-format">
      <name>Evidence Packet (.pop)</name>

      <t>
        The .pop (Proof of Process) file is the primary Evidence artifact produced by the Attester in the RATS architecture. It contains all cryptographic proofs accumulated during document authorship, including SHA-256 hash chains, HMAC bindings, VDF outputs, and behavioral evidence, encoded using CBOR with the PPPP tag (1347571280) and structured according to the evidence-packet type in the CDDL schema. The Evidence packet is the authoritative record of the authoring process. It may be submitted to a Verifier for appraisal per the RATS workflow, archived alongside the document for future verification using only the cryptographic primitives (SHA-256, HMAC, VDF) without access to external services, or shared with Relying Parties that perform their own verification using the CDDL schema and verification procedures defined in this specification. A .pop file is typically larger than its corresponding .war file because it contains complete segment-based Merkle trees with SHA-256 linkage, full VDF proofs for each inter-segment interval, and behavioral evidence including jitter histograms and entropy commitments.
      </t>
    </section>

    <section anchor="war-format">
      <name>Attestation Result (.war)</name>

      <t>
        The .war (Writers Authenticity Report) file is the Attestation Result produced by a Verifier after appraising an Evidence packet per the RATS architecture. It serves as a portable verification certificate, signed with COSE, that may be distributed independently of the original Evidence; it is encoded using CBOR with the WAR tag (1463894560) and conforms to the EAT profile defined in this specification. The Attestation Result is intended for distribution alongside published documents. It provides a COSE-signed verdict from a trusted Verifier (the forensic-assessment enumeration value), a summary of verified claims derived from SHA-256 hash chain verification and VDF recomputation without including the full evidence, a confidence score in the range [0.0, 1.0] for Relying Party decision-making that incorporates factors such as entropy sufficiency and calibration attestation presence, and caveats documenting verification limitations such as missing hardware attestation via TPM <xref target="TPM2.0"/> or pending external anchor confirmations from RFC 3161 timestamps. Relying Parties may trust the .war file based on the Verifier's reputation and COSE signature validation, or request the original .pop file for independent verification using the CDDL schema and cryptographic primitives (SHA-256, HMAC, VDF) defined in this specification. This flexibility supports a range of trust models within the RATS framework, from fully delegated verification, where Relying Parties trust the Verifier's EAT claims, to adversarial multi-verifier scenarios, where multiple independent Verifiers appraise the same Evidence.
      </t>
    </section>

    <section anchor="format-relationship">
      <name>Format Relationship</name>

      <t>
        The reference-packet-id field in the Attestation Result links the two CBOR-encoded formats: it matches the packet-id of the appraised Evidence packet. Both identifiers are UUIDs per RFC 9562 <xref target="RFC9562"/>, ensuring global uniqueness across all Evidence packets ever produced. The reference-packet-id is included in the COSE-signed payload of the Attestation Result, so any attempt to modify the binding invalidates the Verifier's signature. This construction binds each Attestation Result unambiguously to a specific Evidence packet, preventing substitution attacks in which a COSE-signed Attestation Result is fraudulently associated with a different Evidence packet; this property is critical for the RATS trust model, where Relying Parties may receive Attestation Results from Verifiers they trust without access to the original Evidence. Random (version 4) UUIDs provide 122 bits of entropy, making collision probability negligible even across billions of Evidence packets.
      </t>
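      <t>
        Once the COSE signature over the .war payload has been validated,
        the binding check reduces to a UUID comparison (a minimal sketch;
        the function name is illustrative):
      </t>
      <sourcecode type="python"><![CDATA[
import uuid

def check_binding(war_reference_packet_id: bytes,
                  pop_packet_id: bytes) -> bool:
    """Return True iff the Attestation Result references the Evidence
    packet it claims to appraise. COSE signature validation over the
    .war payload (which covers reference-packet-id) is assumed to
    have succeeded before this check."""
    return (uuid.UUID(bytes=war_reference_packet_id) ==
            uuid.UUID(bytes=pop_packet_id))
]]></sourcecode>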
    </section>
      </section>

      <section anchor="evidence-packet-structure">
    <name>Evidence Packet Structure</name>

    <t>
      The evidence-packet structure contains the complete attestation evidence produced by the Attester in the RATS architecture, encapsulating all cryptographic proofs (SHA-256 hash chains, HMAC bindings, and VDF outputs) as well as behavioral evidence captured during the authoring process. The schema appendix provides a normative CDDL definition with complete type definitions and constraints; this section describes the semantic meaning of each component to guide implementers in constructing and parsing CBOR-encoded Evidence packets. The structure employs CBOR encoding throughout, with integer keys in the range 1-99 reserved for core protocol fields to minimize encoding size, while string keys are permitted for vendor extensions that extend the base CDDL schema.
    </t>

    <artwork type="cddl"><![CDATA[
evidence-packet = #6.1347571280({
        1 => uint,                      ; version (1)
        2 => vdf-structure,             ; VDF
        3 => jitter-seal-structure,     ; Jitter Seal (mandatory in v1.1+)
        4 => content-hash-tree,         ; Merkle for segments
        5 => correlation-proof,         ; Spearman Correlation
        6 => error-topology,            ; Fractal Error Pattern
        7 => hardware-attestation,      ; Hardware Assurance Binding
        8 => process-metrics,           ; Raw Process Measurements

        * tstr => any,                  ; extensions
})

vdf-structure = {
        1 => bstr,                      ; input: H(DST_CHAIN || content || jitter_seal)
        2 => bstr,                      ; output
        3 => uint64,                    ; iterations
        4 => [* uint64],                ; rdtsc_checkpoints (continuous calibration)
        5 => bstr,                      ; entropic_pulse: HMAC(SK, T ^ E)
}

jitter-seal-structure = {
        1 => tstr,                      ; lang (e.g., "en-US")
        2 => bstr,                      ; bucket_commitment (ZK-Private)
        3 => uint,                      ; entropy_millibits
        5 => int .within -100..100,     ; pink_noise_slope_decibits (-10.0..10.0)
}

content-hash-tree = {
        1 => bstr,                      ; root
        2 => uint16 .ge 20,             ; segment_count
}

correlation-proof = {
        1 => int16 .within -1000..1000, ; rho (Scaled: -1000..1000)
        2 => 700,                       ; threshold (0.7 * 1000)
}

process-metrics = {
        1 => ratio-millibits,           ; linearity-score
        2 => ratio-millibits,           ; structural-edit-ratio
        3 => int,                       ; hesitation-phase-offset (signed millibits)
        4 => ratio-millibits,           ; revision-clustering
        5 => ratio-millibits,           ; fatigue-slope
        6 => uint,                      ; checkpoint-count
        7 => uint,                      ; total-duration-ms
        ? 8 => [+ ratio-millibits],     ; per-checkpoint-conformity-scores
}
]]></artwork>
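    <t>
      The leading semantic tag lets a parser discriminate packet types
      before decoding the map. A minimal sketch of the tag head encoding
      follows (stdlib only; a real implementation would use a CBOR
      library):
    </t>
    <sourcecode type="python"><![CDATA[
import struct

POP_TAG = 1347571280  # evidence-packet
WAR_TAG = 1463894560  # attestation result

def tag_head(tag: int) -> bytes:
    """CBOR tag head: major type 6 with a 32-bit argument (initial
    byte 0xDA); both registered tags fit in 32 bits."""
    return bytes([0xDA]) + struct.pack(">I", tag)

def leading_tag(encoded: bytes) -> int:
    """Read the leading tag from an encoded packet."""
    if encoded[0] != 0xDA:
        raise ValueError("expected a tag with 32-bit argument")
    return struct.unpack(">I", encoded[1:5])[0]
]]></sourcecode>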

    <section anchor="required-fields">
      <name>Required Fields</name>

      <t>
        The required fields in the evidence-packet structure provide the essential metadata and cryptographic content needed for verification per the RATS architecture, with each field encoded according to the CDDL schema in the appendix. The version field (key 1) indicates the schema version number, currently 1; implementations MUST reject packets with unrecognized major versions to ensure forward compatibility with future revisions of this CBOR schema. The profile field (key 2) contains the EAT profile URI (https://example.com/rats/eat/profile/pop/1.0) that identifies this specification, with IANA registration to be requested upon working group adoption as detailed in <xref target="iana-considerations"/>. The packet-id field (key 3) is a UUID per RFC 9562 <xref target="RFC9562"/> uniquely identifying this Evidence packet, generated by the Attester at packet creation time using a cryptographically secure random source. The created field (key 4) is a timestamp indicating when this packet was finalized, encoded using CBOR tag 1 (epoch-based date/time) per RFC 3339 <xref target="RFC3339"/> conventions; note that this timestamp is informational and not cryptographically protected, with temporal ordering established instead by VDF causality. The document field (key 5) contains the document-ref structure binding the Evidence to the document artifact via SHA-256 content hash as described in <xref target="document-binding"/>. The checkpoints field (key 6) is a segment-based Merkle tree of content hashes forming the evidence chain with SHA-256 hash linkage and VDF proofs as described in <xref target="checkpoint-chain"/>.
      </t>
    </section>

    <section anchor="tiered-sections">
      <name>Tiered Optional Sections</name>

      <t>
        The optional sections (keys 10-17) in the CDDL schema correspond to evidence tiers that determine the strength of assurance provided by the CBOR-encoded Evidence packet within the RATS architecture. Higher tiers require additional data collection and produce larger packets, but afford stronger evidence for Verifiers appraising claims. The presence-section (key 10) contributes to the Standard tier by adding human presence challenges with timing verified against human reaction time baselines. The forensics-section (key 11) and keystroke-section (key 12) are REQUIRED for the Enhanced tier, adding edit topology analysis, AI indicator scores, and detailed jitter samples with entropy commitments computed using SHA-256 and bound via HMAC. The hardware-section (key 13) is REQUIRED for the Enhanced and Maximum tiers, adding TPM 2.0 or Secure Enclave attestation with device-bound keys. The external-section (key 14) contributes to the Maximum tier by adding RFC 3161 timestamps and optional blockchain anchors for absolute time binding. The absence-section (key 15, detailed in <xref target="absence-proofs"/>) contributes to the Maximum tier by documenting negative evidence claims with explicit trust requirements. The forgery-cost-section (key 16, detailed in <xref target="forgery-cost-bounds"/>) contributes to the Maximum tier by quantifying the computational cost of VDF recomputation and behavioral simulation. The declaration (key 17) may appear at all tiers and contains author attestations signed with COSE.
      </t>
    </section>

    <section anchor="extensibility">
      <name>Extensibility</name>

      <t>
        The evidence-packet structure defined in CDDL supports forward-compatible extensions through string-keyed fields, following the CBOR conventions for extensible maps that allow new fields to be added without breaking existing implementations. Integer keys in the range 1-99 are reserved for this specification and future versions thereof, providing space for additional standardized fields while maintaining compact CBOR encoding. String keys MAY be used for vendor or application-specific extensions that are not part of the core CDDL schema, enabling domain-specific customizations such as additional metadata fields or alternative evidence formats. Verifiers MUST ignore unrecognized string-keyed fields per the RATS extensibility model, allowing Evidence packets with vendor extensions to be verified by any compliant implementation. Verifiers MUST reject packets containing unrecognized integer keys in the reserved range (1-99) to prevent future standardized fields from being misinterpreted by older implementations, ensuring that cryptographic verification using SHA-256 and HMAC is only performed on packets that conform to a known schema version.
      </t>
    </section>
      </section>

      <section anchor="checkpoint-chain">
    <name>Segment Tree Chain</name>

    <t>
      The segment chain is the core evidentiary structure in the RATS profile defined by this specification and forms the backbone of the Evidence packet's cryptographic guarantees. Each checkpoint represents a witnessed document state at a specific point in the authoring process, cryptographically linked to its predecessor via SHA-256 hashes that create an immutable sequence. Because each element commits to its predecessor through the prev-hash field, the chain is tamper-evident: any change to segment N invalidates the prev-hash in segment N+1, which in turn invalidates segment N+1's hash used in segment N+2, and so on through the entire chain. The VDF proofs entangled with each checkpoint strengthen this construction further: recomputing the chain from any modification point requires sequential time proportional to the number of subsequent checkpoints, the jitter bindings authenticated via HMAC ensure that behavioral entropy cannot be transplanted between checkpoints, and the chain-mac computed using HMAC-SHA256 prevents checkpoint transplantation between sessions.
    </t>

    <section anchor="checkpoint-structure">
      <name>Checkpoint Structure</name>

      <artwork type="cddl"><![CDATA[
checkpoint = {
        1 => uint,                      ; sequence
        2 => uuid,                      ; checkpoint-id
        3 => pop-timestamp,             ; timestamp
        4 => hash-value,                ; content-hash
        5 => uint,                      ; char-count
        6 => uint,                      ; word-count
        7 => edit-delta,                ; delta
        8 => hash-value,                ; prev-hash
        9 => hash-value,                ; tree-root
        10 => vdf-proof,                ; vdf-proof
        11 => jitter-binding,           ; jitter-binding
        12 => bstr .size 32,            ; chain-mac

        * tstr => any,                  ; extensions
}
]]></artwork>

      <dl>
        <dt>sequence (key 1):</dt>
        <dd>
          Zero-indexed ordinal position in the segment chain.
          MUST be strictly monotonically increasing.
        </dd>

        <dt>checkpoint-id (key 2):</dt>
        <dd>
          UUID uniquely identifying this checkpoint within the packet.
        </dd>

        <dt>timestamp (key 3):</dt>
        <dd>
          Local timestamp when the checkpoint was created. Note that
          local timestamps are untrusted; temporal ordering is
          established by VDF causality.
        </dd>

        <dt>content-hash (key 4):</dt>
        <dd>
          Cryptographic hash of the document content at this checkpoint.
          SHA-256 RECOMMENDED.
        </dd>

        <dt>char-count (key 5), word-count (key 6):</dt>
        <dd>
          Document statistics at this checkpoint. Informational only;
          not cryptographically bound.
        </dd>

        <dt>delta (key 7):</dt>
        <dd>
          Edit delta since previous checkpoint. Contains character
          counts for additions, deletions, and edit operations.
          No content is included.
        </dd>

        <dt>prev-hash (key 8):</dt>
        <dd>
          Hash of the previous checkpoint (tree-root{N-1}).
          For the genesis checkpoint (sequence = 0), this MUST be
          32 zero bytes.
        </dd>

        <dt>tree-root (key 9):</dt>
        <dd>
          Binding hash computed over all checkpoint fields, creating
          the hash chain.
        </dd>

        <dt>vdf-proof (key 10):</dt>
        <dd>
          Verifiable Delay Function proof establishing minimum elapsed
          time. See <xref target="vdf-mechanisms"/>.
        </dd>

        <dt>jitter-binding (key 11):</dt>
        <dd>
          Captured behavioral entropy binding. See
          <xref target="jitter-seal"/>.
        </dd>

        <dt>chain-mac (key 12):</dt>
        <dd>
          HMAC-SHA256 binding the checkpoint to the chain key,
          preventing transplantation of checkpoints between sessions.
        </dd>
      </dl>
    </section>

    <section anchor="hash-chain-construction">
      <name>Hash Chain Construction</name>

      <t>
        The segment chain forms a cryptographic hash chain through the
        prev-hash linkage. The construction employs SHA-256 as the
        default hash algorithm, though algorithm agility is supported
        for future requirements:
      </t>

      <artwork><![CDATA[
+---------------+     +---------------+     +---------------+
| Checkpoint 0  |     | Checkpoint 1  |     | Checkpoint 2  |
|---------------|     |---------------|     |---------------|
| prev-hash:    |<----| prev-hash:    |<----| prev-hash:    |
|   (32 zeros)  |  H  |   H(CP_0)     |  H  |   H(CP_1)     |
| checkpoint-   |---->| checkpoint-   |---->| checkpoint-   |
|   hash: H_0   |     |   hash: H_1   |     |   hash: H_2   |
+---------------+     +---------------+     +---------------+
]]></artwork>

      <t>
        The tree-root is computed as:
      </t>

      <artwork><![CDATA[
tree-root = H(
        "witnessd-checkpoint-v1" ||
        sequence ||
        checkpoint-id ||
        timestamp ||
        content-hash ||
        char-count ||
        word-count ||
        CBOR(delta) ||
        prev-hash ||
        CBOR(vdf-proof) ||
        CBOR(jitter-binding)
)
]]></artwork>

      <t>
        This construction ensures that modifying any field in any
        checkpoint invalidates all subsequent segment hashes, affording
        tamper-evidence for the entire chain. The cascading invalidation
        makes selective tampering impractical: an adversary would need to
        recompute all VDF proofs from the modification point forward.
      </t>
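      <t>
        A minimal sketch of the chain verification implied by this
        construction follows (the per-field concatenation is collapsed
        into a single pre-serialized payload for brevity; names are
        illustrative):
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib

GENESIS_PREV = b"\x00" * 32

def tree_root(payload: bytes, prev_hash: bytes) -> bytes:
    """Simplified tree-root: the domain separator, the serialized
    checkpoint fields (here one opaque payload), and prev-hash."""
    return hashlib.sha256(
        b"witnessd-checkpoint-v1" + payload + prev_hash).digest()

def verify_chain(chain) -> bool:
    """chain: list of (payload, prev_hash). The genesis prev-hash
    must be 32 zero bytes; each later prev-hash must equal the
    previous checkpoint's tree-root."""
    expected = GENESIS_PREV
    for payload, prev_hash in chain:
        if prev_hash != expected:
            return False
        expected = tree_root(payload, prev_hash)
    return True
]]></sourcecode>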
    </section>

    <section anchor="merkle-tree-construction">
      <name>Merkle Tree Construction</name>
      <t>
        The segment chain is further structured as a standard binary Merkle Tree
        (RFC 6962), where each segment hash serves as a leaf. This construction
        enables efficient logarithmic-time inclusion proofs for subsets of segments.
      </t>
      <t>
        External anchors commit to the Merkle root of the entire authoring session,
        thereby affording tamper-evidence for all segments with a single external
        signature. Verifiers MAY validate inclusion of specific segments by verifying
        the Merkle path from the segment leaf to the anchored root.
      </t>
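      <t>
        A minimal sketch of the RFC 6962 tree hash and audit-path
        verification over segment leaf data follows (function names are
        illustrative):
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _split(n: int) -> int:
    # largest power of two strictly less than n
    k = 1
    while k * 2 < n:
        k *= 2
    return k

def merkle_root(leaves):
    """RFC 6962 Merkle Tree Hash: 0x00 prefix for leaves,
    0x01 for interior nodes."""
    if len(leaves) == 1:
        return _h(b"\x00" + leaves[0])
    k = _split(len(leaves))
    return _h(b"\x01" + merkle_root(leaves[:k]) + merkle_root(leaves[k:]))

def audit_path(leaves, m):
    """Inclusion path for leaf m as (sibling_hash, sibling_is_right)
    pairs, ordered leaf-to-root."""
    if len(leaves) == 1:
        return []
    k = _split(len(leaves))
    if m < k:
        return audit_path(leaves[:k], m) + [(merkle_root(leaves[k:]), True)]
    return audit_path(leaves[k:], m - k) + [(merkle_root(leaves[:k]), False)]

def verify_inclusion(leaf, path, root) -> bool:
    h = _h(b"\x00" + leaf)
    for sibling, sibling_is_right in path:
        h = (_h(b"\x01" + h + sibling) if sibling_is_right
             else _h(b"\x01" + sibling + h))
    return h == root
]]></sourcecode>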
    </section>

    <section anchor="evidence-format-versions">
      <name>Evidence Format Versions</name>

      <t>
        The evidence-packet version field (key 1) indicates the format version
        used for evidence generation. This specification defines two versions
        with distinct security properties:
      </t>

      <dl>
        <dt>Version 1.0 (Legacy Parallel Mode):</dt>
        <dd>
          <t>
            In version 1.0, VDF computation and jitter capture MAY proceed
            in parallel. The jitter commitment is bound to the final
            evidence packet but is not entangled with the VDF input chain.
            This mode permits faster evidence generation but provides weaker
            temporal guarantees: an adversary with pre-computed VDF outputs
            could potentially substitute jitter data without VDF recomputation.
            Version 1.0 evidence SHOULD be treated with reduced confidence
            for temporal claims.
          </t>
        </dd>

        <dt>Version 1.1 (Entangled Computation Mode):</dt>
        <dd>
          <t>
            In version 1.1, jitter capture MUST complete before VDF computation
            begins for each checkpoint. The jitter-binding entropy-commitment
            is incorporated into the VDF input:
          </t>
          <artwork><![CDATA[
    VDF_input{N} = H(
        VDF_output{N-1} ||
        content-hash{N} ||
        jitter-binding{N}.entropy-commitment ||
        sequence{N}
    )
          ]]></artwork>
          <t>
            This entanglement creates a causal dependency: the jitter data
            MUST exist before VDF computation can proceed. An adversary
            attempting to substitute jitter data must recompute the entire
            VDF chain from that point forward, incurring the full temporal
            cost. Version 1.1 is REQUIRED for new implementations and
            provides the security guarantees described throughout this
            specification.
          </t>
        </dd>
      </dl>
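      <t>
        A minimal sketch of the version 1.1 input derivation follows
        (the serialization of sequence is illustrative; this sketch does
        not fix a specific integer encoding):
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib

def vdf_input(prev_vdf_output: bytes, content_hash: bytes,
              entropy_commitment: bytes, sequence: int) -> bytes:
    """VDF_input{N} per the entangled-computation construction:
    H(VDF_output{N-1} || content-hash{N} ||
      jitter-binding{N}.entropy-commitment || sequence{N})."""
    h = hashlib.sha256()
    h.update(prev_vdf_output)
    h.update(content_hash)
    h.update(entropy_commitment)
    h.update(sequence.to_bytes(8, "big"))  # illustrative encoding
    return h.digest()
]]></sourcecode>
      <t>
        Because the entropy commitment is hashed into the input,
        substituting jitter data changes the VDF input and forces
        recomputation of every subsequent proof in the chain.
      </t>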

      <t>
        Verifiers MUST check the version field and SHOULD apply appropriate
        confidence adjustments:
      </t>

      <ul>
        <li>
          Version 1.1: Full confidence in temporal binding and VDF guarantees.
        </li>
        <li>
          Version 1.0: Reduced confidence; temporal claims limited to
          "evidence existed at some point" rather than "evidence was
          generated over the claimed duration."
        </li>
        <li>
          Unknown versions: Verification SHOULD fail with an error indicating
          unsupported format version.
        </li>
      </ul>

      <t>
        The verifier_nonce field (when present) is incorporated into the
        packet signature regardless of version:
        SIG_k(H3 || verifier_nonce). This provides replay prevention
        independent of VDF entanglement mode.
      </t>
    </section>

      </section>

      <section anchor="document-binding">
    <name>Document Binding</name>

    <t>
      The document-ref structure binds the Evidence packet to a specific
      document without including the document content. The binding is
      established with cryptographic hashes computed using SHA-256,
      allowing verification that a document corresponds to an Evidence
      packet without revealing the document content to parties who do
      not already possess it.
    </t>

    <artwork type="cddl"><![CDATA[
document-ref = {
        1 => hash-value,                ; content-hash
        2 => tstr,                      ; filename (optional)
        3 => uint,                      ; byte-length
        4 => uint,                      ; char-count
        ? 5 => hash-salt-mode,          ; salt mode
        ? 6 => bstr,                    ; salt-commitment
}
]]></artwork>

    <section anchor="content-hash-binding">
      <name>Content Hash Binding</name>

      <t>
        The content-hash (key 1) is the cryptographic hash of the final
        document state and equals the content-hash in the final
        checkpoint. Document binding is verified by computing
        H(document-content) using SHA-256, comparing it with
        document-ref.content-hash and with checkpoints{-1}.content-hash,
        and confirming that all three values match. A mismatch indicates
        either that the document has been modified since the Evidence
        was generated, or that the Evidence packet does not correspond
        to the presented document.
      </t>
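      <t>
        A minimal sketch of the three-way comparison for unsalted
        bindings (function name illustrative):
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib

def verify_document_binding(document: bytes,
                            document_ref_hash: bytes,
                            final_checkpoint_hash: bytes) -> bool:
    """Unsalted mode: H(document-content) must equal both
    document-ref.content-hash and the final checkpoint
    content-hash."""
    computed = hashlib.sha256(document).digest()
    return computed == document_ref_hash == final_checkpoint_hash
]]></sourcecode>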
    </section>

    <section anchor="salt-modes">
      <name>Salt Modes for Privacy</name>

      <t>
        The hash-salt-mode field controls how the content hash is
        computed, enabling privacy-preserving verification scenarios
        where global verifiability is not desired:
      </t>

      <table>
        <thead>
          <tr>
            <th>Value</th>
            <th>Mode</th>
            <th>Hash Computation</th>
            <th>Verification</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>0</td>
            <td>unsalted</td>
            <td>H(content)</td>
            <td>Anyone with document can verify</td>
          </tr>
          <tr>
            <td>1</td>
            <td>author-salted</td>
            <td>H(salt || content)</td>
            <td>Author reveals salt to chosen verifiers</td>
          </tr>
        </tbody>
      </table>

      <t>
        For salted modes, the author provides the salt out-of-band for
        verification; before using the salt, Verifiers confirm that
        H(provided-salt) matches salt-commitment. Author-salted mode
        supports scenarios where the document binding should not be
        globally verifiable (e.g., unpublished manuscripts, confidential
        documents), giving authors control over who may verify the
        binding between their Evidence and their document.
      </t>
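      <t>
        A minimal sketch of author-salted verification, assuming the
        salt commitment is H(salt) with SHA-256 (function name
        illustrative):
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib

def verify_salted_binding(document: bytes, salt: bytes,
                          salt_commitment: bytes,
                          content_hash: bytes) -> bool:
    """Author-salted mode: check the revealed salt against its
    commitment, then check H(salt || content) against the bound
    hash."""
    if hashlib.sha256(salt).digest() != salt_commitment:
        return False
    return hashlib.sha256(salt + document).digest() == content_hash
]]></sourcecode>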
    </section>
      </section>

      <section anchor="evidence-tiers">
    <name>Evidence Content Tiers</name>

    <t>
      PPPP Evidence packets are classified by which optional sections are present.
      The content tier describes the depth of behavioral and forensic data collected,
      independent of the attestation assurance level (<xref target="attestation-assurance-levels"/>).
      Content tiers align with the implementation profiles defined in
      <xref target="profile-architecture"/>, which specify the Mandatory-to-Implement
      requirements for each tier.
    </t>

    <t>
      The three content tiers are:
    </t>

    <dl>
      <dt>CORE (Tier Value 1):</dt>
      <dd>
        Checkpoint chain with VDF proofs, SHA-256 content binding, and RFC 3161
        timestamps. Proves temporal ordering and content integrity. See
        <xref target="profile-core-def"/> for MTI requirements.
      </dd>

      <dt>ENHANCED (Tier Value 2):</dt>
      <dd>
        All CORE components plus behavioral entropy (jitter samples), presence
        challenges, and intra-checkpoint correlation. Adds evidence of interactive
        authoring behavior. See <xref target="profile-enhanced-def"/> for MTI requirements.
      </dd>

      <dt>MAXIMUM (Tier Value 3):</dt>
      <dd>
        All ENHANCED components plus error topology analysis, STARK proofs,
        CEE binding, absence proofs, and forgery cost bounds. Provides the
        strongest available evidence for adversarial scenarios. See
        <xref target="profile-maximum-def"/> for MTI requirements.
      </dd>
    </dl>

    <section anchor="tier-selection">
      <name>Tier Selection Guidance</name>

      <t>
        It is RECOMMENDED that authors select the minimum tier that
        meets their verification requirements. Higher tiers collect more
        behavioral data and produce larger Evidence packets, which may
        raise privacy concerns or storage constraints in some deployment
        scenarios.
      </t>

      <table>
        <thead>
          <tr>
            <th>Content Tier</th>
            <th>Typical Use Cases</th>
            <th>Privacy Impact</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>CORE</td>
            <td>Personal notes, internal docs, low-stakes records</td>
            <td>Minimal</td>
          </tr>
          <tr>
            <td>ENHANCED</td>
            <td>Academic submissions, professional reports, business records</td>
            <td>Moderate</td>
          </tr>
          <tr>
            <td>MAXIMUM</td>
            <td>Litigation support, forensic investigation, regulatory compliance</td>
            <td>Higher</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="orthogonal-dimensions">
      <name>Relationship to Attestation Assurance</name>

      <t>
        Content tier and attestation assurance level (<xref target="attestation-assurance-levels"/>)
        are orthogonal dimensions. An Evidence packet has both:
      </t>

      <ul>
        <li>A content tier (CORE/ENHANCED/MAXIMUM) describing what
        evidence sections are present</li>
        <li>An attestation tier (T1/T2/T3/T4) describing how
        strongly the evidence is hardware-bound</li>
      </ul>

      <t>
        For example, a MAXIMUM content tier packet may be generated with T1
        (software-only) attestation on devices lacking hardware security, while
        a CORE content tier packet may have T4 (hardware-hardened) attestation
        when strong device binding is available but behavioral data collection
        is not desired.
      </t>

      <t>
        Relying Parties SHOULD establish minimum requirements for both dimensions
        based on their risk tolerance and regulatory obligations.
      </t>
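      <t>
        A minimal sketch of such a two-dimensional acceptance check (the
        default minimums are illustrative policy choices, not
        normative):
      </t>
      <sourcecode type="python"><![CDATA[
def meets_policy(content_tier: int, attestation_tier: int,
                 min_content: int = 2, min_attestation: int = 3) -> bool:
    """Enforce independent minimums on both dimensions:
    content tier CORE=1..MAXIMUM=3, attestation tier T1=1..T4=4.
    Defaults (ENHANCED, T3) are illustrative."""
    return (content_tier >= min_content and
            attestation_tier >= min_attestation)
]]></sourcecode>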
    </section>
      </section>

      <section anchor="attestation-assurance-levels">
    <name>Attestation Assurance Levels</name>

    <t>
      Attestation Assurance Levels define the strength of hardware binding
      and cryptographic protection for PPPP Evidence packets. This dimension
      is orthogonal to the content tier (<xref target="evidence-tiers"/>): content
      tier describes what evidence is collected, while attestation tier describes
      how strongly that evidence is bound to hardware trust anchors.
    </t>

    <t>
      The attestation tier system maps to established assurance frameworks
      including NIST SP 800-63B Authenticator Assurance Levels (AAL),
      ISO/IEC 29115 Levels of Assurance (LoA), and Entity Attestation Token
      (EAT) security levels as defined in <xref target="I-D.ietf-rats-eat"/>.
    </t>

    <t>
      Each Evidence packet MUST declare its attestation tier in key 10 of the
      evidence-packet structure, enabling Verifiers to enforce tier-based
      acceptance policies. The attestation tier reflects the actual hardware
      capabilities used during evidence generation, not the maximum capabilities
      available on the device.
    </t>

    <section anchor="tier-t1-software">
      <name>Tier T1: Software-Only</name>

      <t>
        T1 provides baseline evidence generation using pure software
        implementations without hardware security features.
      </t>

      <dl>
        <dt>Attestation Mode:</dt>
        <dd>software</dd>

        <dt>Binding Strength:</dt>
        <dd>none (no hardware binding) or hmac_local (local key only)</dd>

        <dt>NIST AAL Mapping:</dt>
        <dd>AAL1 - Single-factor authentication equivalent</dd>

        <dt>ISO LoA Mapping:</dt>
        <dd>LoA1 - Low confidence in identity</dd>

        <dt>EAT Security Level:</dt>
        <dd>unrestricted (0) or restricted (1)</dd>

        <dt>Security Properties:</dt>
        <dd>
          <ul>
            <li>VDF timing provides temporal ordering</li>
            <li>Hash chains provide tamper evidence</li>
            <li>Jitter entropy provides behavioral binding</li>
            <li>No hardware root of trust</li>
            <li>Keys stored in software (file system)</li>
          </ul>
        </dd>

        <dt>Limitations:</dt>
        <dd>
          <ul>
            <li>DEVICE_BINDING_NOT_VERIFIED - Device identity not cryptographically bound</li>
            <li>KEY_EXTRACTION_POSSIBLE - Signing keys may be extracted by malware</li>
            <li>NO_HARDWARE_ATTESTATION - Cannot prove hardware integrity</li>
          </ul>
        </dd>
      </dl>
    </section>

    <section anchor="tier-t2-attested">
      <name>Tier T2: Attested Software</name>

      <t>
        T2 extends T1 with optional hardware attestation hooks when available.
        The Attesting Environment attempts to use platform security features
        but degrades gracefully when hardware is unavailable.
      </t>

      <dl>
        <dt>Attestation Mode:</dt>
        <dd>attested_software</dd>

        <dt>Binding Strength:</dt>
        <dd>hmac_local or cryptographic (when hardware available)</dd>

        <dt>NIST AAL Mapping:</dt>
        <dd>AAL1-AAL2 - Depending on hardware availability</dd>

        <dt>ISO LoA Mapping:</dt>
        <dd>LoA1-LoA2 - Low to medium confidence</dd>

        <dt>EAT Security Level:</dt>
        <dd>restricted (1) or secure_restricted (2)</dd>

        <dt>Security Properties:</dt>
        <dd>
          <ul>
            <li>All T1 properties</li>
            <li>Hardware attestation when available (opportunistic)</li>
            <li>Platform security APIs used when present</li>
            <li>Keychain/Credential Guard integration on supported platforms</li>
          </ul>
        </dd>

        <dt>Limitations:</dt>
        <dd>
          <ul>
            <li>HARDWARE_OPTIONAL - Hardware features may not be present</li>
            <li>DEGRADED_MODE_POSSIBLE - May fall back to T1 behavior</li>
            <li>VARIABLE_ASSURANCE - Assurance depends on runtime environment</li>
          </ul>
        </dd>
      </dl>
    </section>

    <section anchor="tier-t3-hardware-bound">
      <name>Tier T3: Hardware-Bound</name>

      <t>
        T3 requires hardware security module binding via TPM 2.0
        or platform Secure Enclave. Evidence generation MUST fail if
        hardware attestation is unavailable.
      </t>

      <dl>
        <dt>Attestation Mode:</dt>
        <dd>hardware_bound</dd>

        <dt>Binding Strength:</dt>
        <dd>cryptographic - TPM or Secure Enclave key binding required</dd>

        <dt>NIST AAL Mapping:</dt>
        <dd>AAL3 - Hardware cryptographic authenticator</dd>

        <dt>ISO LoA Mapping:</dt>
        <dd>LoA3 - High confidence in identity</dd>

        <dt>EAT Security Level:</dt>
        <dd>hardware (3)</dd>

        <dt>Security Properties:</dt>
        <dd>
          <ul>
            <li>All T2 properties (non-degraded)</li>
            <li>Hardware-protected signing keys (non-exportable)</li>
            <li>Platform integrity measurement (PCR values)</li>
            <li>Device binding cryptographically verified</li>
            <li>Attestation includes hardware identity</li>
          </ul>
        </dd>

        <dt>Hardware Requirements:</dt>
        <dd>
          <ul>
            <li>TPM 2.0 with attestation capability, OR</li>
            <li>Apple Secure Enclave with attestation, OR</li>
            <li>ARM TrustZone with attestation capability</li>
          </ul>
        </dd>

        <dt>Limitations:</dt>
        <dd>
          <ul>
            <li>NO_PUF_BINDING - Physical unclonable function not required</li>
            <li>FIRMWARE_TRUST_REQUIRED - Relies on hardware vendor firmware</li>
          </ul>
        </dd>
      </dl>
    </section>

    <section anchor="tier-t4-hardware-hardened">
      <name>Tier T4: Hardware-Hardened</name>

      <t>
        T4 represents maximum attestation strength with discrete TPM,
        Physical Unclonable Function (PUF) binding, and enclave execution.
      </t>

      <dl>
        <dt>Attestation Mode:</dt>
        <dd>hardware_hardened</dd>

        <dt>Binding Strength:</dt>
        <dd>physical - PUF-derived key binding with TPM attestation</dd>

        <dt>NIST AAL Mapping:</dt>
        <dd>AAL3+ - Exceeds AAL3 with physical binding</dd>

        <dt>ISO LoA Mapping:</dt>
        <dd>LoA4 - Very high confidence in identity</dd>

        <dt>EAT Security Level:</dt>
        <dd>hardware (3) with enhanced claims</dd>

        <dt>Common Criteria Reference:</dt>
        <dd>EAL4+ evaluation target equivalent</dd>

        <dt>Security Properties:</dt>
        <dd>
          <ul>
            <li>All T3 properties</li>
            <li>PUF-derived entropy binding</li>
            <li>Discrete TPM (not firmware TPM)</li>
            <li>Secure enclave execution for sensitive operations</li>
            <li>Side-channel resistance for timing operations</li>
            <li>Physical tamper evidence</li>
          </ul>
        </dd>

        <dt>Hardware Requirements:</dt>
        <dd>
          <ul>
            <li>Discrete TPM 2.0 (hardware module, not fTPM)</li>
            <li>PUF capability (SRAM PUF or equivalent)</li>
            <li>Secure enclave (SGX, TrustZone, or Secure Enclave)</li>
          </ul>
        </dd>

        <dt>Limitations:</dt>
        <dd>
          <ul>
            <li>LIMITED_DEVICE_SUPPORT - Requires specific hardware</li>
            <li>HIGHER_LATENCY - Additional cryptographic operations</li>
          </ul>
        </dd>
      </dl>
    </section>

    <section anchor="tier-mapping-table">
      <name>Assurance Level Mapping</name>

      <t>
        The following table summarizes the mapping between PPPP Attestation
        Tiers and external assurance frameworks. For use case guidance based
        on content tier, see <xref target="tier-selection"/>.
      </t>

      <table>
        <thead>
          <tr>
            <th>PPPP Tier</th>
            <th>NIST AAL</th>
            <th>ISO LoA</th>
            <th>EAT Level</th>
            <th>Binding Strength</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>T1</td>
            <td>AAL1</td>
            <td>LoA1</td>
            <td>0-1</td>
            <td>Software-only</td>
          </tr>
          <tr>
            <td>T2</td>
            <td>AAL1-2</td>
            <td>LoA1-2</td>
            <td>1-2</td>
            <td>Opportunistic hardware</td>
          </tr>
          <tr>
            <td>T3</td>
            <td>AAL3</td>
            <td>LoA3</td>
            <td>3</td>
            <td>Required TPM/Enclave</td>
          </tr>
          <tr>
            <td>T4</td>
            <td>AAL3+</td>
            <td>LoA4</td>
            <td>3+</td>
            <td>Discrete TPM + PUF</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="tier-rp-guidance">
      <name>Relying Party Guidance</name>

      <t>
        Relying Parties SHOULD establish minimum requirements for both
        attestation tier (this section) and content tier
        (<xref target="tier-selection"/>) based on their risk tolerance
        and regulatory obligations. The following guidance addresses
        attestation tier requirements specifically:
      </t>

      <dl>
        <dt>Accept T1 or higher when:</dt>
        <dd>
          <ul>
            <li>Evidence is for personal reference only</li>
            <li>Author reputation provides sufficient trust</li>
            <li>Consequences of forgery are minimal</li>
            <li>Hardware security is impractical for the user population</li>
          </ul>
        </dd>

        <dt>Require T2 or higher when:</dt>
        <dd>
          <ul>
            <li>Evidence supports business decisions</li>
            <li>Multiple parties rely on the evidence</li>
            <li>Moderate financial or reputational risk exists</li>
            <li>Professional standards apply</li>
          </ul>
        </dd>

        <dt>Require T3 or higher when:</dt>
        <dd>
          <ul>
            <li>Legal proceedings may reference the evidence</li>
            <li>Regulatory compliance requires hardware binding</li>
            <li>Non-repudiation is a business requirement</li>
            <li>High-value intellectual property is at stake</li>
          </ul>
        </dd>

        <dt>Require T4 when:</dt>
        <dd>
          <ul>
            <li>Evidence must withstand adversarial forensic analysis</li>
            <li>Litigation is anticipated or ongoing</li>
            <li>Maximum available assurance is mandated by policy</li>
            <li>Sophisticated adversaries with substantial compute resources are anticipated (note: HSM compromise by nation-states is out of scope per <xref target="out-of-scope-adversaries"/>)</li>
          </ul>
        </dd>
      </dl>
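
      <t>
        As a non-normative illustration, the guidance above can be encoded
        as a minimum-tier policy table. The risk-category names and the
        helper function below are hypothetical, not defined by this
        specification:
      </t>

      <artwork type="python"><![CDATA[
# Illustrative Relying Party policy: minimum attestation tier per
# risk category, following the guidance in this section.
MIN_TIER_BY_RISK = {
    "personal": 1,   # accept T1 or higher
    "business": 2,   # require T2 or higher
    "legal":    3,   # require T3 or higher
    "forensic": 4,   # require T4
}

def tier_acceptable(declared_tier, risk_category):
    """True if the packet's declared attestation tier meets the
    Relying Party's minimum for this risk category."""
    return declared_tier >= MIN_TIER_BY_RISK[risk_category]
]]></artwork>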

      <t>
        Verifiers MUST include the declared attestation tier in attestation
        results (WAR files), enabling Relying Parties to enforce tier-based
        acceptance policies. Verifiers SHOULD also include any
        attestation-limitations that apply to the Evidence, as these
        document specific security properties that cannot be claimed at
        the declared tier.
      </t>
    </section>

    <section anchor="tier-hardware-unavailable">
      <name>Behavior When Hardware Unavailable</name>

      <t>
        The Attesting Environment behavior when required hardware is
        unavailable depends on the configured tier:
      </t>

      <dl>
        <dt>T1 Configuration:</dt>
        <dd>
          Hardware availability has no effect. Evidence generation proceeds
          using the software-only implementation.
        </dd>

        <dt>T2 Configuration:</dt>
        <dd>
          Evidence generation proceeds with available capabilities. The
          attestation-limitations array MUST include HARDWARE_NOT_AVAILABLE
          if hardware attestation was attempted but failed. The actual tier
          achieved MAY be lower than T2 if only software capabilities were
          available.
        </dd>

        <dt>T3 Configuration:</dt>
        <dd>
          Evidence generation MUST fail if TPM or Secure Enclave attestation
          is unavailable. Implementations MUST NOT silently degrade to T2
          or T1. An appropriate error code MUST be returned to the caller.
        </dd>

        <dt>T4 Configuration:</dt>
        <dd>
          Evidence generation MUST fail if discrete TPM, PUF, or enclave
          capability is unavailable. Implementations MUST NOT silently
          degrade to lower tiers.
        </dd>
      </dl>

      <t>
        Implementations MUST accurately report the tier achieved, not the
        tier configured. A T2-configured implementation that lacks hardware
        MUST report T1 in the evidence packet, not T2.
      </t>
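
      <t>
        The fallback rules above can be summarized in the following
        non-normative sketch; the function name and the exception's error
        string are illustrative only, while HARDWARE_NOT_AVAILABLE is the
        limitation code defined for T2:
      </t>

      <artwork type="python"><![CDATA[
class AttestationError(Exception):
    """Raised when a tier's required hardware is unavailable."""

def achieved_tier(configured, hw_available):
    """Return (tier actually achieved, attestation-limitations)."""
    if configured == 1:
        return 1, []                    # hardware has no effect at T1
    if configured == 2:
        if hw_available:
            return 2, []
        # degrade gracefully, record the limitation, report T1
        return 1, ["HARDWARE_NOT_AVAILABLE"]
    # T3 and T4 MUST NOT silently degrade
    if not hw_available:
        raise AttestationError("TIER_HARDWARE_UNAVAILABLE")
    return configured, []
]]></artwork>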
    </section>
      </section>

      <section anchor="profile-architecture">
    <name>Profile Architecture</name>

    <t>
      The PPPP specification defines three implementation profiles that establish
      Mandatory-to-Implement (MTI) requirements for interoperability. Each profile
      represents a coherent set of features that implementations MUST support to
      claim conformance at that level. Profile declarations are carried in
      key 9 of the evidence-packet structure as specified in the companion CDDL
      schema <xref target="I-D.condrey-rats-pop-schema"/>.
    </t>

    <t>
      Implementation profiles define what features an implementation MUST support.
      This is related to, but distinct from:
    </t>

    <ul>
      <li>Evidence Content Tiers (<xref target="evidence-tiers"/>): describe
      what optional sections are present in a given Evidence packet</li>
      <li>Attestation Assurance Levels (<xref target="attestation-assurance-levels"/>):
      describe hardware binding strength for a given Evidence packet</li>
    </ul>

    <t>
      A CORE profile implementation may generate packets at any content tier
      (by including optional features), while an ENHANCED profile implementation
      MUST be capable of generating ENHANCED content tier packets.
    </t>

    <section anchor="profile-identifiers">
      <name>Profile Identifiers</name>

      <t>
        Each profile is identified by a URN in the IETF RATS namespace with the
        following format:
      </t>

      <artwork><![CDATA[
urn:ietf:params:rats:pop:profile:<name>
]]></artwork>

      <t>
        The registered profile URNs are:
      </t>

      <table>
        <thead>
          <tr>
            <th>Profile</th>
            <th>Tier Value</th>
            <th>URN</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>CORE</td>
            <td>1</td>
            <td>urn:ietf:params:rats:pop:profile:core</td>
          </tr>
          <tr>
            <td>ENHANCED</td>
            <td>2</td>
            <td>urn:ietf:params:rats:pop:profile:enhanced</td>
          </tr>
          <tr>
            <td>MAXIMUM</td>
            <td>3</td>
            <td>urn:ietf:params:rats:pop:profile:maximum</td>
          </tr>
        </tbody>
      </table>
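
      <t>
        A non-normative sketch of validating a profile URN against the
        registered values (the helper name is illustrative):
      </t>

      <artwork type="python"><![CDATA[
import re

# Anchored pattern covering the three registered profile URNs.
PROFILE_URN = re.compile(
    r"^urn:ietf:params:rats:pop:profile:(core|enhanced|maximum)$")

TIER_BY_NAME = {"core": 1, "enhanced": 2, "maximum": 3}

def parse_profile_urn(urn):
    """Return the tier value for a registered profile URN, or None
    for an unrecognized URN (which a Verifier handles via the
    unknown-profile rules rather than by rejecting the packet)."""
    m = PROFILE_URN.match(urn)
    return TIER_BY_NAME[m.group(1)] if m else None
]]></artwork>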
    </section>

    <section anchor="profile-core-def">
      <name>CORE Profile</name>

      <t>
        The CORE profile establishes the minimum viable implementation for PPPP
        interoperability. All implementations claiming PPPP conformance MUST
        implement at least the CORE profile. The security guarantees provided
        by CORE are:
      </t>

      <ul>
        <li>
          <t>Temporal ordering: VDF proofs establish minimum elapsed time between
          checkpoints with cryptographic assurance.</t>
        </li>
        <li>
          <t>Content integrity: SHA-256 hash binding
          makes the attested document tamper-evident.</t>
        </li>
        <li>
          <t>External anchoring: RFC 3161 timestamps
          provide independent temporal witnesses from trusted third parties.</t>
        </li>
      </ul>

      <t>
        The following features are Mandatory-to-Implement for CORE:
      </t>

      <table>
        <thead>
          <tr>
            <th>Feature ID</th>
            <th>Feature Name</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>1</td>
            <td>vdf-iterated-sha256</td>
            <td>Iterated SHA-256 VDF construction per <xref target="vdf-iterated-hash"/></td>
          </tr>
          <tr>
            <td>2</td>
            <td>content-binding</td>
            <td>SHA-256 content hash binding per <xref target="content-hash-binding"/></td>
          </tr>
          <tr>
            <td>3</td>
            <td>external-anchor-rfc3161</td>
            <td>RFC 3161 timestamp anchor support</td>
          </tr>
          <tr>
            <td>4</td>
            <td>checkpoint-chain</td>
            <td>Hash-linked checkpoint chain per <xref target="checkpoint-chain"/></td>
          </tr>
          <tr>
            <td>5</td>
            <td>cose-sign1</td>
            <td>COSE_Sign1 packet signature</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="profile-enhanced-def">
      <name>ENHANCED Profile</name>

      <t>
        The ENHANCED profile adds behavioral entropy capture and correlation
        analysis to the CORE features. Implementations claiming ENHANCED
        conformance MUST implement all CORE features plus the ENHANCED MTI
        features. The additional security guarantees provided by ENHANCED are:
      </t>

      <ul>
        <li>
          <t>Behavioral entropy: Jitter-based entropy capture provides evidence
          of interactive authoring behavior in the creation process.</t>
        </li>
        <li>
          <t>Intra-checkpoint correlation (C_intra): Statistical correlation
          between timing patterns and content evolution within checkpoints.</t>
        </li>
        <li>
          <t>Cognitive load indicators: Metrics derived from typing patterns
          that reflect human cognitive processing characteristics.</t>
        </li>
      </ul>

      <t>
        The following features are Mandatory-to-Implement for ENHANCED
        (in addition to all CORE features):
      </t>

      <table>
        <thead>
          <tr>
            <th>Feature ID</th>
            <th>Feature Name</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>50</td>
            <td>behavioral-entropy</td>
            <td>Jitter-based behavioral entropy per <xref target="jitter-seal"/></td>
          </tr>
          <tr>
            <td>51</td>
            <td>c-intra-correlation</td>
            <td>Intra-checkpoint Spearman correlation</td>
          </tr>
          <tr>
            <td>52</td>
            <td>cognitive-load</td>
            <td>Cognitive load indicators derived from timing</td>
          </tr>
          <tr>
            <td>53</td>
            <td>presence-challenges</td>
            <td>Human presence verification challenges</td>
          </tr>
          <tr>
            <td>54</td>
            <td>keystroke-jitter</td>
            <td>Keystroke timing jitter capture</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="profile-maximum-def">
      <name>MAXIMUM Profile</name>

      <t>
        The MAXIMUM profile provides the strongest available evidence through
        comprehensive behavioral analysis, cryptographic proofs, and hardware
        attestation. Implementations claiming MAXIMUM conformance MUST implement
        all CORE and ENHANCED features plus the MAXIMUM MTI features. The
        additional security guarantees provided by MAXIMUM are:
      </t>

      <ul>
        <li>
          <t>Error topology analysis: Fractal pattern analysis of editing errors
          that distinguishes human error patterns from automated generation.</t>
        </li>
        <li>
          <t>STARK proofs: Scalable transparent arguments of knowledge for
          efficient verification of complex evidence structures.</t>
        </li>
        <li>
          <t>Cryptographic Entropy Entanglement (CEE): VDF outputs entangled
          with behavioral entropy to prevent backdating attacks.</t>
        </li>
        <li>
          <t>Hardware attestation: TPM 2.0 or Secure
          Enclave binding for device-level trust anchoring.</t>
        </li>
      </ul>

      <t>
        The following features are Mandatory-to-Implement for MAXIMUM
        (in addition to all CORE and ENHANCED features):
      </t>

      <table>
        <thead>
          <tr>
            <th>Feature ID</th>
            <th>Feature Name</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>100</td>
            <td>error-topology</td>
            <td>Fractal error pattern analysis per <xref target="error-topology"/></td>
          </tr>
          <tr>
            <td>101</td>
            <td>stark-proofs</td>
            <td>STARK-based verification proofs</td>
          </tr>
          <tr>
            <td>102</td>
            <td>cee-binding</td>
            <td>Cryptographic Entropy Entanglement per <xref target="jitter-vdf-entanglement"/></td>
          </tr>
          <tr>
            <td>103</td>
            <td>absence-proofs</td>
            <td>Negative evidence claims per <xref target="absence-proofs"/></td>
          </tr>
          <tr>
            <td>104</td>
            <td>forgery-cost-bounds</td>
            <td>Economic attack cost analysis per <xref target="forgery-cost-bounds"/></td>
          </tr>
          <tr>
            <td>105</td>
            <td>hardware-attestation</td>
            <td>TPM/Secure Enclave binding</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="profile-declaration-structure">
      <name>Profile Declaration Structure</name>

      <t>
        Evidence packets MAY include a profile declaration in key 9 of the
        evidence-packet structure. The declaration specifies the profile tier
        and URI, with optional indication of features enabled beyond the MTI
        requirements. The CDDL <xref target="RFC8610"/> structure is:
      </t>

      <artwork type="cddl"><![CDATA[
profile-declaration = {
        1 => profile-tier,              ; tier (1=core, 2=enhanced, 3=maximum)
        2 => profile-uri,               ; URN identifier
        ? 3 => [+ feature-id],          ; enabled-features (beyond MTI)
        ? 4 => tstr,                    ; implementation-id
}

profile-tier = &(
        core: 1,
        enhanced: 2,
        maximum: 3,
)

profile-uri = tstr .regexp "urn:ietf:params:rats:pop:profile:(core|enhanced|maximum)"
]]></artwork>

      <t>
        The enabled-features array (key 3) lists feature IDs that are implemented
        beyond the MTI requirements for the declared tier. This allows CORE
        implementations to indicate support for specific ENHANCED or MAXIMUM
        features without claiming full conformance to those tiers. The
        implementation-id (key 4) is an opaque string identifying the software
        that generated the Evidence packet, useful for debugging and ecosystem
        analysis but carrying no normative weight.
      </t>
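
      <t>
        A non-normative example instance of the structure above, shown as a
        Python map prior to CBOR encoding (the implementation-id value is
        fabricated for illustration):
      </t>

      <artwork type="python"><![CDATA[
# A CORE-tier declaration that additionally enables keystroke-jitter
# (feature 54) beyond the CORE MTI set.
declaration = {
    1: 1,                                        # profile-tier: core
    2: "urn:ietf:params:rats:pop:profile:core",  # profile-uri
    3: [54],                                     # enabled-features beyond MTI
    4: "example-editor/1.2.0",                   # implementation-id (opaque)
}
]]></artwork>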
    </section>

    <section anchor="profile-verification-behavior">
      <name>Verification Behavior</name>

      <t>
        Verifiers MUST handle Evidence packets according to the following rules
        based on the presence or absence of profile declarations:
      </t>

      <section anchor="profile-present">
        <name>Profile Declaration Present</name>

        <t>
          When key 9 (profile-declaration) is present in the evidence-packet,
          Verifiers MUST:
        </t>

        <ol>
          <li>
            <t>Validate that the profile-uri corresponds to a known profile.</t>
          </li>
          <li>
            <t>Verify that all MTI features for the declared tier are present
            in the Evidence packet with valid data.</t>
          </li>
          <li>
            <t>If MTI validation fails, the Verifier MUST reject the packet
            with error code PROFILE_INCOMPLETE.</t>
          </li>
          <li>
            <t>If MTI validation succeeds, the Verifier MAY rely on the
            security guarantees associated with the declared profile tier.</t>
          </li>
        </ol>
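
        <t>
          Steps 2 and 3 can be sketched non-normatively using the feature
          IDs from the MTI Summary table (the function name is
          illustrative):
        </t>

        <artwork type="python"><![CDATA[
CORE_MTI = {1, 2, 3, 4, 5}
ENHANCED_MTI = CORE_MTI | {50, 51, 52, 53, 54}
MAXIMUM_MTI = ENHANCED_MTI | {100, 101, 102, 103, 104, 105}
MTI_BY_TIER = {1: CORE_MTI, 2: ENHANCED_MTI, 3: MAXIMUM_MTI}

def check_mti(declared_tier, present_features):
    """Return [] when every MTI feature for the declared tier is
    present; otherwise the PROFILE_INCOMPLETE rejection code."""
    missing = MTI_BY_TIER[declared_tier] - set(present_features)
    return ["PROFILE_INCOMPLETE"] if missing else []
]]></artwork>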
      </section>

      <section anchor="profile-absent">
        <name>Profile Declaration Absent</name>

        <t>
          When key 9 is absent from the evidence-packet, Verifiers MUST apply
          defensive processing:
        </t>

        <ol>
          <li>
            <t>The Verifier MUST NOT assume any specific profile tier.</t>
          </li>
          <li>
            <t>The Verifier SHOULD attempt to infer the effective tier by
            examining which structures are present in the packet.</t>
          </li>
          <li>
            <t>The inferred tier MUST be reported in the attestation-result
            with caveat PROFILE_INFERRED indicating that the profile was not
            explicitly declared by the Attester.</t>
          </li>
          <li>
            <t>Relying Parties SHOULD treat inferred profiles with lower
            confidence than explicitly declared profiles.</t>
          </li>
        </ol>
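
        <t>
          A non-normative sketch of the inference in steps 2 and 3: the
          effective tier is the highest tier whose full MTI feature set is
          present, reported with the PROFILE_INFERRED caveat:
        </t>

        <artwork type="python"><![CDATA[
CORE_MTI = {1, 2, 3, 4, 5}
ENHANCED_MTI = CORE_MTI | {50, 51, 52, 53, 54}
MAXIMUM_MTI = ENHANCED_MTI | {100, 101, 102, 103, 104, 105}

def infer_tier(present_features):
    """Return (inferred tier, caveats); tier 0 means even the CORE
    MTI features are not all present."""
    present = set(present_features)
    tier = 0
    for t, mti in ((1, CORE_MTI), (2, ENHANCED_MTI), (3, MAXIMUM_MTI)):
        if mti <= present:              # all MTI features present
            tier = t
    return tier, ["PROFILE_INFERRED"]
]]></artwork>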
      </section>

      <section anchor="profile-unknown">
        <name>Unknown Profile URI</name>

        <t>
          When the profile-uri value is not recognized by the Verifier:
        </t>

        <ol>
          <li>
            <t>The Verifier MUST NOT reject the packet solely because the
            profile URI is unknown.</t>
          </li>
          <li>
            <t>The Verifier SHOULD process the packet as if no profile were
            declared, applying the inference rules from
            <xref target="profile-absent"/>.</t>
          </li>
          <li>
            <t>The attestation-result MUST include caveat PROFILE_UNKNOWN
            with the unrecognized URI value.</t>
          </li>
        </ol>

        <t>
          This forward-compatibility behavior allows future profile extensions
          without breaking existing Verifiers while ensuring that Relying
          Parties are informed when unfamiliar profiles are encountered.
        </t>
      </section>
    </section>

    <section anchor="profile-mti-summary">
      <name>MTI Summary</name>

      <t>
        The following table summarizes the Mandatory-to-Implement requirements
        across all profiles. An "M" indicates the feature is mandatory for that
        profile tier; an "O" indicates the feature is optional but MAY be
        declared in the enabled-features array.
      </t>

      <table>
        <thead>
          <tr>
            <th>Feature ID</th>
            <th>Feature Name</th>
            <th>CORE</th>
            <th>ENHANCED</th>
            <th>MAXIMUM</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>1</td>
            <td>vdf-iterated-sha256</td>
            <td>M</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>2</td>
            <td>content-binding</td>
            <td>M</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>3</td>
            <td>external-anchor-rfc3161</td>
            <td>M</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>4</td>
            <td>checkpoint-chain</td>
            <td>M</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>5</td>
            <td>cose-sign1</td>
            <td>M</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>50</td>
            <td>behavioral-entropy</td>
            <td>O</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>51</td>
            <td>c-intra-correlation</td>
            <td>O</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>52</td>
            <td>cognitive-load</td>
            <td>O</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>53</td>
            <td>presence-challenges</td>
            <td>O</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>54</td>
            <td>keystroke-jitter</td>
            <td>O</td>
            <td>M</td>
            <td>M</td>
          </tr>
          <tr>
            <td>100</td>
            <td>error-topology</td>
            <td>O</td>
            <td>O</td>
            <td>M</td>
          </tr>
          <tr>
            <td>101</td>
            <td>stark-proofs</td>
            <td>O</td>
            <td>O</td>
            <td>M</td>
          </tr>
          <tr>
            <td>102</td>
            <td>cee-binding</td>
            <td>O</td>
            <td>O</td>
            <td>M</td>
          </tr>
          <tr>
            <td>103</td>
            <td>absence-proofs</td>
            <td>O</td>
            <td>O</td>
            <td>M</td>
          </tr>
          <tr>
            <td>104</td>
            <td>forgery-cost-bounds</td>
            <td>O</td>
            <td>O</td>
            <td>M</td>
          </tr>
          <tr>
            <td>105</td>
            <td>hardware-attestation</td>
            <td>O</td>
            <td>O</td>
            <td>M</td>
          </tr>
        </tbody>
      </table>
    </section>
      </section>

      <section anchor="attestation-result-structure">
    <name>Attestation Result Structure</name>

    <t>
      The attestation-result structure contains the Verifier's
      assessment of an Evidence packet. It implements a witnessd-specific
      profile of the EAT Attestation Result (EAR) format as defined in
      <xref target="I-D.ietf-rats-ear"/>.
    </t>

    <artwork type="cddl"><![CDATA[
attestation-result = {
        1 => uint,                      ; version
        2 => uuid,                      ; reference-packet-id
        3 => pop-timestamp,             ; verified-at
        4 => forensic-assessment,       ; verdict
        5 => confidence-millibits,      ; confidence (0-1000 = 0.0-1.0)
        6 => [+ result-claim],          ; verified-claims
        7 => cose-signature,            ; verifier-signature
        ? 8 => tstr,                    ; verifier-identity
        ? 9 => verifier-metadata,       ; additional info
        ? 10 => [+ tstr],               ; caveats
        ? 11 => source-consistency-analysis, ; Verifier's interpretation
        * tstr => any,                  ; extensions
}

source-consistency-analysis = {
        1 => tstr,                      ; detected-pattern
        2 => ratio-millibits,           ; aggregate-consistency (0-1000)
        ? 3 => [+ uint],               ; deviation-checkpoint-indices
        ? 4 => tstr,                   ; verifier-policy-id
}

; Fixed-point type definitions (see schema spec for details)
confidence-millibits = uint .le 1000   ; 0-1000 representing 0.000-1.000
ratio-millibits = uint .le 1000        ; generic 0.0-1.0 ratio
entropy-decibits = uint .le 640        ; 0-640 representing 0.0-64.0 bits
cost-microdollars = uint               ; USD * 1,000,000
duration-ms = uint                     ; milliseconds
p-value-centibits = uint .le 10000     ; p-values with 4 decimal precision
]]></artwork>
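
    <t>
      The following non-normative helpers illustrate the fixed-point
      encoding used by confidence-millibits and ratio-millibits; encoding
      ratios as bounded integers avoids floating-point values in the CBOR
      structure:
    </t>

    <artwork type="python"><![CDATA[
def to_millibits(ratio):
    """Encode a 0.0-1.0 ratio as an integer in 0-1000."""
    if not 0.0 <= ratio <= 1.0:
        raise ValueError("ratio out of range")
    return round(ratio * 1000)

def from_millibits(value):
    """Decode a 0-1000 integer back to a 0.0-1.0 ratio."""
    if not 0 <= value <= 1000:
        raise ValueError("millibits out of range")
    return value / 1000.0
]]></artwork>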

    <section anchor="verdict-field">
      <name>Verdict Field</name>

      <t>
        The verdict (key 4) is the Verifier's overall forensic
        assessment using the forensic-assessment enumeration:
      </t>

      <table>
        <thead>
          <tr>
            <th>Value</th>
            <th>Assessment</th>
            <th>Meaning</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>0</td>
            <td>not-assessed</td>
            <td>Verification incomplete or not attempted</td>
          </tr>
          <tr>
            <td>1</td>
            <td>source-consistent</td>
            <td>Evidence chain shows consistent generative process throughout</td>
          </tr>
          <tr>
            <td>2</td>
            <td>source-consistent-partial</td>
            <td>Evidence chain shows consistency with minor deviations</td>
          </tr>
          <tr>
            <td>3</td>
            <td>inconclusive</td>
            <td>Insufficient evidence to characterize source consistency</td>
          </tr>
          <tr>
            <td>4</td>
            <td>source-transition-detected</td>
            <td>Evidence chain contains measurable process transitions</td>
          </tr>
          <tr>
            <td>5</td>
            <td>source-inconsistent</td>
            <td>Evidence chain shows significant process inconsistency</td>
          </tr>
        </tbody>
      </table>

      <t>
        IMPORTANT: The verdict characterizes the consistency of the
        evidence chain, not the identity or nature of the author. A
        verdict of "source-transition-detected" means the behavioral
        metrics changed measurably at specific checkpoints. What caused
        that change - a tool switch, a collaborator, fatigue, or
        something else - is not determined by the Verifier. The
        Relying Party applies domain-specific policy to decide
        whether the observed pattern is acceptable.
      </t>
    </section>

    <section anchor="confidence-score">
      <name>Confidence Score</name>

      <t>
        The confidence-score (key 5) is an unsigned integer in millibits
        (0-1000) representing the Verifier's confidence in the verdict.
        Divide by 1000 to convert to the 0.0-1.0 range:
      </t>

      <ul>
        <li>0 - 299: Low confidence (limited evidence)</li>
        <li>300 - 699: Moderate confidence (typical evidence)</li>
        <li>700 - 1000: High confidence (strong evidence)</li>
      </ul>

      <t>
        The confidence score incorporates:
      </t>

      <ul>
        <li>Evidence tier (higher tiers increase confidence ceiling)</li>
        <li>Segment chain completeness</li>
        <li>Entropy sufficiency in jitter bindings</li>
        <li>VDF calibration attestation presence</li>
        <li>External anchor confirmations</li>
      </ul>
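      <t>
        As a non-normative sketch, the banding and conversion above can
        be expressed as follows (the function names are illustrative,
        not part of this specification):
      </t>

      <artwork type="python"><![CDATA[
def confidence_band(millibits: int) -> str:
    """Map a confidence score in millibits (0-1000) to a coarse band."""
    if not 0 <= millibits <= 1000:
        raise ValueError("confidence must be in [0, 1000]")
    if millibits < 300:
        return "low"
    if millibits < 700:
        return "moderate"
    return "high"

def to_ratio(millibits: int) -> float:
    """Convert millibits to the 0.0-1.0 range."""
    return millibits / 1000.0
]]></artwork>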
    </section>

    <section anchor="verified-claims">
      <name>Verified Claims</name>

      <t>
        The verified-claims array (key 6) contains individual claim
        verification results:
      </t>

      <artwork type="cddl"><![CDATA[
result-claim = {
        1 => uint,                      ; claim-type
        2 => bool,                      ; verified
        ? 3 => tstr,                    ; detail
        ? 4 => confidence-level,        ; claim-confidence
}
]]></artwork>

      <t>
        The claim-type values correspond to the absence-claim-type
        enumeration, enabling direct mapping between Evidence claims
        and Attestation Result verification outcomes.
      </t>
    </section>

    <section anchor="verifier-signature">
      <name>Verifier Signature</name>

      <t>
        The verifier-signature (key 7) is a COSE_Sign1 signature
        over the Attestation Result payload (fields 1-6 plus any
        optional fields 8-10). This signature:
      </t>

      <ul>
        <li>
          Authenticates the Verifier identity
        </li>
        <li>
          Ensures integrity of the Attestation Result
        </li>
        <li>
          Enables Relying Parties to verify the result came from
          a trusted Verifier
        </li>
      </ul>
    </section>

    <section anchor="caveats">
      <name>Caveats</name>

      <t>
        The caveats array (key 10) documents limitations and warnings
        that Relying Parties should consider:
      </t>

      <ul>
        <li>
          "No hardware attestation available"
        </li>
        <li>
          "External anchors pending confirmation"
        </li>
        <li>
          "Jitter entropy below recommended threshold"
        </li>
        <li>
          "Author declares AI tool usage"
        </li>
      </ul>

      <t>
        Verifiers MUST include appropriate caveats when the Evidence
        has known limitations. Relying Parties SHOULD review caveats
        before making trust decisions.
      </t>
    </section>
      </section>

      <section anchor="cbor-encoding">
    <name>CBOR Encoding</name>

    <t>
      Both Evidence packets and Attestation Results use CBOR
      (Concise Binary Object Representation) encoding per RFC 8949.
    </t>

    <section anchor="cbor-tags">
      <name>Semantic Tags</name>

      <t>
        Top-level structures use semantic tags for type identification:
      </t>

      <table>
        <thead>
          <tr>
            <th>Tag</th>
            <th>Hex</th>
            <th>ASCII</th>
            <th>Structure</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>1347440672</td>
            <td>0x50505020</td>
            <td>"PPP "</td>
            <td>tagged-evidence-packet</td>
          </tr>
          <tr>
            <td>1463898656</td>
            <td>0x57415220</td>
            <td>"WAR "</td>
            <td>tagged-attestation-result</td>
          </tr>
        </tbody>
      </table>

      <t>
        These tags enable format detection without external metadata.
        Parsers can identify the packet type by examining the leading
        tag value.
      </t>
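      <t>
        As a non-normative illustration, a four-character tag value can
        be derived from its ASCII mnemonic (the helper name below is
        illustrative):
      </t>

      <artwork type="python"><![CDATA[
def cbor_tag_from_ascii(mnemonic: str) -> int:
    """Derive a 32-bit CBOR tag value from a four-character ASCII mnemonic."""
    data = mnemonic.encode("ascii")
    if len(data) != 4:
        raise ValueError("mnemonic must be exactly four ASCII characters")
    return int.from_bytes(data, "big")

print(hex(cbor_tag_from_ascii("PPP ")))  # 0x50505020
print(hex(cbor_tag_from_ascii("WAR ")))  # 0x57415220
]]></artwork>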
    </section>

    <section anchor="key-encoding">
      <name>Key Encoding Strategy</name>

      <t>
        The schema uses a dual key encoding strategy for efficiency
        and extensibility:
      </t>

      <dl>
        <dt>Integer Keys (1-99):</dt>
        <dd>
          Reserved for core protocol fields defined in this specification.
          Provides compact encoding and enables efficient parsing.
        </dd>

        <dt>String Keys:</dt>
        <dd>
          Used for vendor extensions, application-specific fields, and
          future protocol extensions before standardization. Provides
          self-describing field names at the cost of encoding size.
        </dd>
      </dl>

      <t>
        Example size comparison for a field named "forensics":
      </t>

      <artwork><![CDATA[
Integer key (11):          1 byte   (0x0B)
String key ("forensics"):  10 bytes (0x69666F72656E73696373)
]]></artwork>

      <t>
        For a typical Evidence packet with dozens of fields, integer
        keys reduce packet size by 20-40%.
      </t>
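      <t>
        The size difference can be checked with a minimal hand-rolled
        encoder for the two key forms involved (a sketch covering only
        small unsigned integers and short text strings, not a general
        CBOR encoder):
      </t>

      <artwork type="python"><![CDATA[
def encode_uint_key(key: int) -> bytes:
    """CBOR major type 0, values 0-23: the key fits in the initial byte."""
    if not 0 <= key <= 23:
        raise ValueError("sketch handles only keys 0-23")
    return bytes([key])

def encode_text_key(key: str) -> bytes:
    """CBOR major type 3, length < 24: one header byte plus UTF-8 data."""
    data = key.encode("utf-8")
    if len(data) >= 24:
        raise ValueError("sketch handles only short keys")
    return bytes([0x60 | len(data)]) + data

print(len(encode_uint_key(11)))           # 1
print(len(encode_text_key("forensics")))  # 10
]]></artwork>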
    </section>

    <section anchor="deterministic-encoding">
      <name>Deterministic Encoding</name>

      <t>
        Evidence packets MUST use deterministic CBOR encoding
        (RFC 8949 Section 4.2) to enable:
      </t>

      <ul>
        <li>
          Byte-exact reproduction of packets for signature verification
        </li>
        <li>
          Consistent hashing for cache and deduplication purposes
        </li>
        <li>
          Simplified debugging and comparison
        </li>
      </ul>

      <t>
        Deterministic encoding requirements:
      </t>

      <ul>
        <li>
          Map keys sorted in bytewise lexicographic order
        </li>
        <li>
          Integers encoded in minimal representation
        </li>
        <li>
          Floating-point values canonicalized
        </li>
      </ul>
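      <t>
        For example, the bytewise key-ordering rule can be sketched by
        sorting keys on their encoded form (reusing a minimal encoder
        for small integer and short text keys; this is illustrative,
        not a complete RFC 8949 implementation):
      </t>

      <artwork type="python"><![CDATA[
def encode_key(key) -> bytes:
    """Minimal CBOR encoding for small uint (0-23) and short text keys."""
    if isinstance(key, int) and 0 <= key <= 23:
        return bytes([key])                      # major type 0
    if isinstance(key, str) and len(key) < 24:
        data = key.encode("utf-8")
        return bytes([0x60 | len(data)]) + data  # major type 3
    raise NotImplementedError("sketch handles only small keys")

def deterministic_key_order(keys):
    """Sort map keys in bytewise lexicographic order of their encodings."""
    return sorted(keys, key=encode_key)

print(deterministic_key_order([2, 1, "forensics", 10]))
]]></artwork>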
    </section>
      </section>

      <section anchor="eat-profile">
    <name>EAT Profile</name>

    <t>
      This specification defines an EAT (Entity Attestation Token)
      profile for Proof of Process evidence. The profile URI is:
    </t>

    <artwork><![CDATA[
https://example.com/rats/eat/profile/pop/1.0
]]></artwork>

    <section anchor="eat-claims">
      <name>Custom EAT Claims</name>

      <t>
        The following custom claims are proposed for IANA registration
        upon working group adoption:
      </t>

      <table>
        <thead>
          <tr>
            <th>Claim Name</th>
            <th>Type</th>
            <th>Description</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>pop-forensic-assessment</td>
            <td>uint</td>
            <td>forensic-assessment enumeration value</td>
          </tr>
          <tr>
            <td>pop-presence-score</td>
            <td>uint (millibits)</td>
            <td>Presence challenge pass rate (0-1000)</td>
          </tr>
          <tr>
            <td>pop-evidence-tier</td>
            <td>uint</td>
            <td>Evidence tier (1-4)</td>
          </tr>
          <tr>
            <td>pop-ai-composite-score</td>
            <td>uint (millibits)</td>
            <td>AI indicator composite score (0-1000)</td>
          </tr>
        </tbody>
      </table>
    </section>

    <section anchor="ar4si-extension">
      <name>AR4SI Trustworthiness Extension</name>

      <t>
        The Attestation Result includes a proposed extension to the
        AR4SI (<xref target="I-D.ietf-rats-ar4si"/>) trustworthiness
        vector:
      </t>

      <artwork><![CDATA[
behavioral-consistency: -1..3
   -1 = no claim
    0 = behavioral evidence inconsistent with human authorship
    1 = behavioral evidence inconclusive
    2 = behavioral evidence consistent with human authorship
    3 = behavioral evidence strongly indicates human authorship
]]></artwork>

      <t>
        This extension enables integration of witnessd Attestation
        Results with broader trustworthiness assessment frameworks.
      </t>

      <t>
        The following table provides guidance for mapping PPPP
        forensic-assessment verdicts to AR4SI behavioral-consistency
        values:
      </t>

      <table>
        <thead>
          <tr>
            <th>PPPP Verdict</th>
            <th>AR4SI behavioral-consistency</th>
            <th>Rationale</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>not-assessed (0)</td>
            <td>-1 (no claim)</td>
            <td>Verification not performed</td>
          </tr>
          <tr>
            <td>source-consistent (1)</td>
            <td>2 or 3</td>
            <td>2 for moderate confidence, 3 for high confidence</td>
          </tr>
          <tr>
            <td>source-consistent-partial (2)</td>
            <td>2</td>
            <td>Consistency with acceptable deviations</td>
          </tr>
          <tr>
            <td>inconclusive (3)</td>
            <td>1</td>
            <td>Insufficient evidence for determination</td>
          </tr>
          <tr>
            <td>source-transition-detected (4)</td>
            <td>1</td>
            <td>Transitions detected but not classified</td>
          </tr>
          <tr>
            <td>source-inconsistent (5)</td>
            <td>0</td>
            <td>Evidence inconsistent with single-source composition</td>
          </tr>
        </tbody>
      </table>

      <t>
        Note: The mapping from PPPP verdicts to AR4SI values depends on
        the confidence score and Relying Party policy. The table above
        provides default guidance; implementations MAY adjust based on
        domain-specific requirements.
      </t>
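      <t>
        A default mapping consistent with the table above might be
        sketched as follows (the 700-millibit confidence threshold
        separating values 2 and 3 is an assumed policy choice, not
        normative):
      </t>

      <artwork type="python"><![CDATA[
def map_to_ar4si(verdict: int, confidence_millibits: int) -> int:
    """Map a forensic-assessment verdict (0-5) to the proposed AR4SI
    behavioral-consistency value (-1..3) using the default guidance."""
    if verdict == 0:          # not-assessed
        return -1
    if verdict == 1:          # source-consistent
        return 3 if confidence_millibits >= 700 else 2
    if verdict == 2:          # source-consistent-partial
        return 2
    if verdict in (3, 4):     # inconclusive / source-transition-detected
        return 1
    if verdict == 5:          # source-inconsistent
        return 0
    raise ValueError(f"unknown verdict: {verdict}")
]]></artwork>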
    </section>
      </section>

      <section anchor="evidence-model-security">
    <name>Security Considerations</name>

    <section anchor="tamper-evidence">
      <name>Tamper-Evidence vs. Tamper-Proof</name>

      <t>
        The evidence model provides tamper-EVIDENCE, not tamper-PROOF:
      </t>

      <ul>
        <li>
          <t>Tamper-evident:</t>
          <t>
            Modifications to Evidence packets are detectable through
            cryptographic verification. The hash chain,
            VDF entanglement,
            and HMAC bindings ensure that any alteration invalidates
            the Evidence.
          </t>
        </li>

        <li>
          <t>Not tamper-proof:</t>
          <t>
            An adversary with sufficient resources can fabricate
            Evidence by investing the computational time required
            by VDF proofs and generating plausible behavioral data.
            The forgery cost analysis section of this document
            quantifies this investment.
          </t>
        </li>
      </ul>

      <t>
        Relying Parties should understand this distinction when
        making trust decisions.
      </t>
    </section>

    <section anchor="verification-independence">
      <name>Independent Verification</name>

      <t>
        Evidence packets are designed for independent verification:
      </t>

      <ul>
        <li>
          All cryptographic proofs are included in the packet
        </li>
        <li>
          Verification requires no access to the original device
        </li>
        <li>
          Verification requires no network access (except for
          external anchor validation)
        </li>
        <li>
          Multiple independent Verifiers can appraise the same Evidence
        </li>
      </ul>

      <t>
        This property enables adversarial verification: a skeptical
        Relying Party can verify Evidence without trusting the
        Attester's infrastructure.
      </t>
    </section>

    <section anchor="privacy-construction">
      <name>Privacy by Construction</name>

      <t>
        The evidence model enforces privacy through structural
        constraints:
      </t>

      <ul>
        <li>
          <t>No content storage:</t>
          <t>
            Evidence contains hashes of document states, not content.
            The document itself is never included in Evidence packets.
          </t>
        </li>

        <li>
          <t>No keystroke capture:</t>
          <t>
            Individual characters typed are not recorded. Timing
            intervals are captured without association to specific
            characters.
          </t>
        </li>

        <li>
          <t>Aggregated behavioral data:</t>
          <t>
            Raw timing data is aggregated into histograms before
            inclusion in Evidence. Optional raw interval disclosure
            is user-controlled.
          </t>
        </li>

        <li>
          <t>No screenshots or screen recording:</t>
          <t>
            Visual content is never captured by the Attesting
            Environment.
          </t>
        </li>
      </ul>
    </section>

    <section anchor="attester-trust">
      <name>Attesting Environment Trust</name>

      <t>
        The evidence model assumes a minimally trusted Attesting
        Environment:
      </t>

      <ul>
        <li>
          <t>Chain-verifiable claims (absence-claim-types 1-15):</t>
          <t>
            Can be verified from Evidence alone without trusting
            the AE beyond basic data integrity.
          </t>
        </li>

        <li>
          <t>Monitoring-dependent claims (absence-claim-types 16-63):</t>
          <t>
            Require trust that the AE accurately reported monitored
            events. The ae-trust-basis field documents these assumptions.
          </t>
        </li>
      </ul>

      <t>
        Hardware attestation (<xref target="hardware-assurance"/>) increases AE trust
        by binding Evidence to verified hardware identities.
      </t>
    </section>
      </section>
</section>


    <!-- Section 3: Jitter Seal (Behavioral Entropy) -->
    <section anchor="jitter-seal">
      <name>Jitter Seal: Captured Behavioral Entropy</name>

      <t>
        This section defines the Jitter Seal, a novel contribution to
        behavioral evidence within the RATS architecture
        <xref target="RFC9334"/> that binds captured timing entropy to
        the segment chain using HMAC-SHA256 <xref target="RFC2104"/>
        <xref target="RFC6234"/> commitments. Unlike injected entropy
        (random delays added by software, which could be regenerated if
        the seed is known), captured entropy commits to timing actually
        measured from human input events: the commitment is computed
        with SHA-256 before histogram aggregation and bound to the VDF
        chain <xref target="Pietrzak2019"/>
        <xref target="Wesolowski2019"/> through the jitter-binding
        structure defined in CDDL <xref target="RFC8610"/>. The result
        is Evidence, encoded in CBOR <xref target="RFC8949"/>, that
        cannot be regenerated without access to the original input
        stream, because the entropy-commitment fixes the raw timing
        data before any statistical summarization that might otherwise
        allow reconstruction.
      </t>

      <section anchor="jitter-design-principles">
        <name>Design Principles</name>

        <t>
          The Jitter Seal addresses a fundamental limitation of existing
          attestation frameworks, including the base RATS architecture:
          the inability to distinguish evidence generated during genuine
          human interaction from evidence reconstructed after the fact.
          By cryptographically committing to captured timing entropy
          with SHA-256 before histogram aggregation, and binding that
          commitment to the VDF chain via HMAC, the mechanism produces
          evidence that bears an indelible relationship to the moment of
          its creation. Three design principles guide the Jitter Seal:
        </t>

        <dl>
          <dt>Captured vs. injected entropy:</dt>
          <dd>
            Injected entropy (random delays inserted by software) can be
            regenerated if the seed is known. Captured entropy commits,
            via SHA-256, to timing measurements that existed only at the
            moment of observation; an adversary cannot regenerate it
            without access to the original input stream.
          </dd>
          <dt>Commitment before aggregation:</dt>
          <dd>
            The entropy-commitment is computed with SHA-256 and bound to
            the segment chain via HMAC before the histogram summary is
            finalized, preventing an adversary from crafting statistics
            that match a predetermined commitment encoded in CBOR.
          </dd>
          <dt>Privacy-preserving aggregation:</dt>
          <dd>
            Raw timing intervals are aggregated into histogram buckets
            defined in the CDDL schema, preserving the statistical
            properties needed for entropy verification while preventing
            reconstruction of the original keystroke sequence. Raw
            intervals are disclosed only per the user's privacy
            preferences.
          </dd>
        </dl>
      </section>

      <section anchor="jitter-binding-structure">
        <name>Jitter Binding Structure</name>

        <t>
          This specification mandates that each checkpoint contain a
          jitter-binding structure, whose fields together bind the
          behavioral entropy captured during authoring to the segment
          chain protected by SHA-256 hash linkage and VDF proofs. The
          structure is encoded in CBOR per the CDDL schema below and
          uses HMAC-SHA256 for binding integrity, preventing jitter
          data from being transplanted between checkpoints.
        </t>

        <artwork type="cddl"><![CDATA[
    jitter-binding = {
        1 => hash-value,           ; entropy-commitment
        2 => [+ entropy-source],   ; sources
        3 => jitter-summary,       ; summary
        4 => bstr .size 32,        ; binding-mac
        ? 5 => [+ uint],           ; raw-intervals (optional)
        ? 6 => checkpoint-behavioral,     ; Per-checkpoint behavioral measurements
    }

    checkpoint-behavioral = {
        1 => ratio-millibits,      ; spectral-slope (pink noise alpha)
        2 => ratio-millibits,      ; hurst-exponent
        3 => ratio-millibits,      ; intra-checkpoint-consistency
        ? 4 => uint,               ; edit-operation-count
        ? 5 => uint,               ; composition-operation-count
        ? 6 => uint,               ; revision-operation-count
        ? 7 => uint,               ; structural-operation-count
    }
    ]]></artwork>

        <section anchor="entropy-commitment">
          <name>Entropy Commitment (Key 1)</name>

          <t>
            The entropy-commitment is a cryptographic hash of the raw
            timing intervals concatenated in observation order, computed
            as H(interval{0} || interval{1} || ... || interval{n}),
            where H is the hash algorithm specified in the hash-value
            structure (SHA-256 is RECOMMENDED). Each interval is encoded
            as a 32-bit unsigned integer representing milliseconds. This
            specification mandates that the commitment be computed
            BEFORE the histogram summary, ensuring that the raw data
            cannot be manipulated to match a desired statistical profile
            after the commitment is fixed. This ordering constraint is
            critical to the security of the Jitter Seal: once the
            entropy-commitment is computed and bound to the VDF input,
            the raw timing data is cryptographically fixed, even though
            only the aggregated histogram appears in the final
            CBOR-encoded Evidence packet.
          </t>
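          <t>
            A sketch of the commitment computation (assuming SHA-256 and
            big-endian serialization of the 32-bit intervals; the
            normative serialization is governed by the schema
            specification):
          </t>

          <artwork type="python"><![CDATA[
import hashlib
import struct

def entropy_commitment(intervals_ms):
    """H(interval_0 || interval_1 || ... || interval_n) over raw timing
    intervals in observation order, each a 32-bit unsigned integer."""
    data = b"".join(struct.pack(">I", iv) for iv in intervals_ms)
    return hashlib.sha256(data).digest()
]]></artwork>

          <t>
            Because the commitment depends on the exact observation
            order, reordering or regenerating intervals yields a
            different digest.
          </t>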
        </section>

        <section anchor="entropy-sources">
          <name>Entropy Sources (Key 2)</name>

          <t>
            The sources array identifies which input modalities
            contributed to the captured entropy:
          </t>

          <table>
            <thead>
              <tr>
                <th>Value</th>
                <th>Source</th>
                <th>Description</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>1</td>
                <td>keystroke-timing</td>
                <td>Inter-key intervals from keyboard input</td>
              </tr>
              <tr>
                <td>2</td>
                <td>pause-patterns</td>
                <td>Gaps between editing bursts (&gt;2 seconds)</td>
              </tr>
              <tr>
                <td>3</td>
                <td>edit-cadence</td>
                <td>Rhythm of insertions/deletions over time</td>
              </tr>
              <tr>
                <td>4</td>
                <td>cursor-movement</td>
                <td>Navigation timing within document</td>
              </tr>
              <tr>
                <td>5</td>
                <td>scroll-behavior</td>
                <td>Document scrolling patterns</td>
              </tr>
              <tr>
                <td>6</td>
                <td>focus-changes</td>
                <td>Application focus gain/loss events</td>
              </tr>
            </tbody>
          </table>

          <t>
            Implementations conforming to this RATS profile MUST include
            at least one source, encoded in CBOR per the CDDL schema.
            The keystroke-timing source (1) affords the highest entropy
            density and SHOULD be included whenever keyboard input is
            available, as it provides the finest-grained timing
            measurements contributing to the entropy-commitment.
            Multiple sources MAY be combined to increase the total
            entropy density and to permit verification even when some
            input modalities are unavailable; the estimated-entropy
            calculation aggregates min-entropy (H_min) across all
            contributing sources.
          </t>
        </section>

        <section anchor="jitter-summary">
          <name>Jitter Summary (Key 3)</name>

          <t>
            The jitter-summary structure, encoded in CBOR per the CDDL
            schema below, provides verifiable statistics without
            exposing raw timing data, enabling Verifiers to assess
            entropy sufficiency per the RATS architecture without access
            to the privacy-sensitive raw intervals.
          </t>

          <artwork type="cddl"><![CDATA[
    jitter-summary = {
        1 => uint,                 ; sample-count
        2 => [+ histogram-bucket], ; timing-histogram
        3 => entropy-decibits,     ; estimated-entropy (decibits, /10 for bits)
        ? 4 => [+ anomaly-flag],   ; anomalies (if detected)
    }

    histogram-bucket = {
        1 => uint,                 ; lower-bound-ms
        2 => uint,                 ; upper-bound-ms
        3 => uint,                 ; count
    }
    ]]></artwork>

          <t>
            The estimated-entropy field (key 3, in decibits; divide by
            10 for bits) is calculated as the Shannon entropy of the
            histogram distribution:
          </t>

          <artwork><![CDATA[
    H = -sum(p[i] * log2(p[i])) for all buckets where p[i] > 0
    p[i] = count[i] / total_samples
    ]]></artwork>

          <t>
            The following bucket boundaries (in milliseconds) are RECOMMENDED:
            0, 50, 100, 200, 500, 1000, 2000, 5000, +infinity.
            These boundaries capture the typical range of human typing
            and pause behavior and were determined empirically from
            analysis of diverse authoring sessions.
          </t>
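          <t>
            A sketch of the histogram and entropy computation using the
            RECOMMENDED bucket boundaries (the estimated-entropy field
            would then hold round(H * 10) decibits):
          </t>

          <artwork type="python"><![CDATA[
import math

# RECOMMENDED lower bounds in ms; the final bucket is open-ended (5000+).
BUCKET_BOUNDS = [0, 50, 100, 200, 500, 1000, 2000, 5000]

def histogram(intervals_ms):
    """Count each interval into the last bucket whose lower bound it meets."""
    counts = [0] * len(BUCKET_BOUNDS)
    for iv in intervals_ms:
        idx = 0
        for i, lower in enumerate(BUCKET_BOUNDS):
            if iv >= lower:
                idx = i
        counts[idx] += 1
    return counts

def shannon_entropy_bits(counts):
    """H = -sum(p[i] * log2(p[i])) over buckets with p[i] > 0."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)
]]></artwork>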
        </section>

        <section anchor="binding-mac">
          <name>Binding MAC (Key 4)</name>

          <t>
            The binding-mac cryptographically binds the jitter data to
            the segment chain:
          </t>

          <artwork><![CDATA[
    binding-mac = HMAC-SHA256(
        key = checkpoint-chain-key,
        message = entropy-commitment ||
                  CBOR(sources) ||
                  CBOR(summary) ||
                  prev-tree-root
    )
    ]]></artwork>

          <t>
            This HMAC-SHA256 binding ensures the following properties
            within the RATS architecture:
          </t>

          <ul>
            <li>
              Jitter data cannot be transplanted between checkpoints,
              because the prev-tree-root included in the HMAC input
              fixes the binding to a specific position in the SHA-256
              hash chain.
            </li>
            <li>
              Jitter data cannot be modified without invalidating the
              segment chain, because the binding-mac is included in the
              tree-root computation.
            </li>
            <li>
              The temporal ordering of jitter observations is preserved,
              because the VDF entanglement includes the
              entropy-commitment.
            </li>
          </ul>

          <t>
            Together, these properties provide strong guarantees about
            the authenticity and integrity of the captured behavioral
            entropy: any tampering is detectable through cryptographic
            verification using SHA-256 and HMAC.
          </t>
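          <t>
            A sketch of the construction (callers supply the
            deterministic CBOR encodings of the sources and summary
            fields):
          </t>

          <artwork type="python"><![CDATA[
import hashlib
import hmac

def binding_mac(chain_key, entropy_commitment, sources_cbor,
                summary_cbor, prev_tree_root):
    """HMAC-SHA256 over the concatenation defined above, binding the
    jitter data to a specific position in the segment chain."""
    msg = entropy_commitment + sources_cbor + summary_cbor + prev_tree_root
    return hmac.new(chain_key, msg, hashlib.sha256).digest()
]]></artwork>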
        </section>

        <section anchor="raw-intervals">
          <name>Raw Intervals (Key 5, Optional)</name>

          <t>
            The raw-intervals array MAY be included for enhanced
            verification. When present, it allows Verifiers to recompute
            the entropy-commitment and verify that it matches, to
            recompute the histogram and verify its consistency, and to
            perform statistical analysis beyond the histogram. Note that
            raw intervals may constitute biometric-adjacent data; this
            privacy concern is addressed in <xref target="jitter-privacy"/>.
          </t>
        </section>
      </section>

      <section anchor="hardware-assurance">
        <name>Hardware Assurance Requirements</name>
        <t>
          High-assurance evidence (Process Score >= 0.9) requires specific
          hardware capabilities at the Attesting Environment:
        </t>
        <ul>
          <li>TPM 2.0: MUST support PCR banks with SHA-256 and provide
          signed quotes (TPM2_Quote) binding evidence to platform state.</li>
          <li>Secure Enclave: MUST provide hardware-backed key storage and
          monotonic counter operations.</li>
          <li>Certificate Validation: Verifiers MUST validate the Attester's
          Attestation Key against the manufacturer's Root CA.</li>
        </ul>
        <t>
          At higher assurance tiers (T3-T4), the hardware anchors evidence
          generation to specific physical silicon, preventing migration of
          evidence generation to faster or different hardware. At lower
          assurance tiers (T1-T2), evidence generation proceeds in software
          with reduced confidence scores. Evidence metadata MUST indicate
          the hardware assurance level so that Verifiers can adjust
          confidence accordingly.
        </t>
      </section>

      <section anchor="attestation-nonce">
        <name>Attestation Nonce Binding</name>
        <t>
          For hardware-attested evidence (T3-T4 tiers), a 32-byte
          cryptographically random attestation nonce MUST be generated
          at session initialization using a cryptographically secure
          random number generator. This attestation_nonce serves distinct
          purposes from the verifier_nonce:
        </t>
        <ul>
          <li>
            <t>TPM Quote Binding:</t>
            <t>
              The attestation_nonce is passed to TPM2_Quote operations,
              binding hardware attestation to this specific evidence session.
              This prevents replay of TPM quotes from previous sessions.
            </t>
          </li>
          <li>
            <t>TEE Session Binding:</t>
            <t>
              For Secure Enclave implementations, the attestation_nonce
              binds enclave attestation reports to the current session.
            </t>
          </li>
          <li>
            <t>Session Uniqueness:</t>
            <t>
              The attestation_nonce ensures each evidence generation session
              produces cryptographically distinct hardware attestations,
              even for identical content.
            </t>
          </li>
        </ul>
        <t>
          The attestation_nonce MUST be included in the evidence packet for
          hardware-attested evidence, enabling Verifiers to confirm the TPM
          quote or TEE attestation report matches the claimed session.
        </t>
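        <t>
          Generation is straightforward with any cryptographically
          secure random source; as an illustrative sketch:
        </t>

        <artwork type="python"><![CDATA[
import secrets

def new_attestation_nonce() -> bytes:
    """32 bytes from a cryptographically secure random number generator."""
    return secrets.token_bytes(32)
]]></artwork>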
      </section>

      <section anchor="timing-clipping">
        <name>Timing Value Clipping</name>
        <t>
          To prevent outlier timing samples from leaking sensitive behavioral
          information, all timing values are clipped to a normative range
          [0, 5000ms]. Values exceeding this range are coerced to the boundary.
          This bounds the sensitivity of timing data and provides consistent
          input ranges for behavioral analysis across all authoring environments.
        </t>
      </section>

      <section anchor="degraded-mode">
        <name>Software-Only Mode</name>
        <t>
          Implementations lacking access to a TEE or TPM operate in
          software-only mode. Evidence produced in this mode MUST be
          flagged with a maximum Process Score of 0.7. Verifiers SHOULD
          treat software-only evidence as a behavioral claim rather than
          a hardware-attested proof of platform binding. Software-only
          evidence still provides VDF temporal ordering, hash chain
          integrity, and behavioral entropy - the limitation is that
          these computations are not bound to specific hardware.
        </t>
      </section>
  </section>

  <section anchor="behavioral-entropy-analysis">
    <name>Behavioral Entropy Analysis</name>
    <t>
      The Attesting Environment computes behavioral entropy metrics
      locally from captured input timing intervals. These metrics
      characterize the statistical properties of the authoring process
      without recording keystroke content. All analysis is performed
      on the local device; no timing data leaves the Attesting
      Environment except as aggregated statistical summaries committed
      via HMAC.
    </t>

  <section anchor="timing-spectral-analysis">
    <name>Timing Spectral Analysis</name>
    <t>
      Human motor systems exhibit characteristic spectral properties
      in inter-keystroke timing intervals. The Attesting Environment
      computes the power spectral density of timing intervals and
      derives two metrics:
    </t>
    <dl>
      <dt>Pink noise slope (alpha):</dt>
      <dd>Human typing typically exhibits 1/f noise, in which power spectral
      density scales inversely with frequency, with slope alpha between 0.8
      and 1.2. Mechanical injection tends toward white noise (alpha near 0)
      or periodic patterns (discrete frequency peaks).</dd>
      <dt>Hurst exponent (H):</dt>
      <dd>Computed via Rescaled Range (R/S) analysis or Detrended Fluctuation
      Analysis (DFA) of timing intervals. Human motor systems typically show
      H ∈ [0.55, 0.85], indicating long-range temporal dependence characteristic
      of natural behavioral processes. Values near 0.5 indicate white noise
      (likely synthetic or random input). Values approaching 1.0 indicate
      highly deterministic sequences (likely automated or mechanical
      generation). Implementations MUST flag timing sequences with H outside
      the [0.55, 0.85] validation range as anomalous in the behavioral
      entropy summary.</dd>
    </dl>
    <t>
      These metrics are included in the behavioral entropy summary at
      each checkpoint. The Verifier evaluates them as informational
      signals contributing to the source consistency assessment, not
      as binary pass/fail gates.
    </t>
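    <t>
      A non-normative, pure-Python sketch of the Rescaled Range (R/S)
      estimator follows. The window-doubling schedule and minimum window
      size are illustrative choices, not requirements of this
      specification:
    </t>
    <sourcecode type="python"><![CDATA[
```python
import math
import random

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent via Rescaled Range (R/S) analysis:
    fit log(mean R/S) against log(window size) by least squares."""
    n = len(series)
    sizes, rs_means = [], []
    size = min_window
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            w = series[start:start + size]
            mean = sum(w) / size
            # cumulative deviations from the window mean
            cum, dev = [], 0.0
            for v in w:
                dev += v - mean
                cum.append(dev)
            r = max(cum) - min(cum)                      # range
            s = math.sqrt(sum((v - mean) ** 2 for v in w) / size)
            if s > 0:
                ratios.append(r / s)
        if ratios:
            sizes.append(size)
            rs_means.append(sum(ratios) / len(ratios))
        size *= 2
    lx = [math.log(s) for s in sizes]
    ly = [math.log(r) for r in rs_means]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den   # slope = Hurst estimate
```
]]></sourcecode>
    <t>
      On short samples the R/S estimator is upward-biased for white
      noise, which is one reason the metric is treated as an
      informational signal rather than a hard gate.
    </t>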
  </section>

  <section anchor="intra-session-consistency">
    <name>Intra-Session Consistency</name>
    <t>
      The Attesting Environment evaluates statistical stability of
      authoring behavior across checkpoints within a session. Each
      checkpoint's behavioral digest captures a timing distribution.
      The Attesting Environment computes the statistical distance
      (KL Divergence) between each checkpoint's distribution and
      the cumulative session baseline.
    </t>
    <t>
      The Intra-Session Consistency score (C_intra) is high when
      timing distributions remain within a stable statistical cluster.
      Significant divergence (exceeding a configurable threshold)
      indicates a potential change in the generative process. This
      divergence is recorded in the evidence chain as a source
      consistency transition event - the Attesting Environment does
      not interpret the cause of the divergence.
    </t>
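    <t>
      A non-normative sketch of the per-checkpoint divergence computation,
      assuming both histograms share the same bucket layout (the additive
      smoothing constant is an implementation choice):
    </t>
    <sourcecode type="python"><![CDATA[
```python
import math

def kl_divergence(p_counts, q_counts, eps=1e-6):
    """Smoothed KL(P || Q) between a checkpoint timing histogram and
    the cumulative session baseline (raw counts, same bucket layout)."""
    p_tot = float(sum(p_counts))
    q_tot = float(sum(q_counts))
    div = 0.0
    for pc, qc in zip(p_counts, q_counts):
        p = pc / p_tot + eps   # additive smoothing avoids log(0)
        q = qc / q_tot + eps
        div += p * math.log(p / q)
    return div
```
]]></sourcecode>
    <t>
      A checkpoint whose divergence from the baseline exceeds the
      configured threshold would be recorded as a source consistency
      transition event.
    </t>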
  </section>

  <section anchor="temporal-evolution">
    <name>Temporal Evolution of Behavioral Metrics</name>
    <t>
      Interactive authoring sessions exhibit characteristic temporal
      evolution over extended durations. The Attesting Environment
      tracks variance evolution across checkpoints:
    </t>
    <ul>
      <li>Timing variance typically increases over multi-hour sessions
      due to motor fatigue.</li>
      <li>The Hurst exponent may drift toward 0.5 as fatigue
      reduces long-range motor correlation.</li>
      <li>Error rate (ratio of deletions to insertions in a sliding
      window) typically increases over time.</li>
    </ul>
    <t>
      These evolution patterns are included in the evidence chain as
      informational metrics. The absence of temporal evolution in a
      long session is a source consistency signal - not proof of
      fabrication, but a measurable characteristic that the Relying
      Party can evaluate in context.
    </t>
  </section>
  </section>

  <section anchor="clock-integrity">
    <name>Clock Integrity</name>
    <t>
      To harden against clock spoofing at the kernel or hypervisor
      level, the Attesting Environment employs Clock-Entropy
      Entanglement (CEE). Rather than reporting raw timestamps,
      the Attesting Environment generates an entropic pulse for
      each checkpoint:
    </t>
    <artwork><![CDATA[
P = HMAC(K_session, DST_CLOCK || timestamp || hardware_entropy)
]]></artwork>
    <t>
      By binding the monotonic timestamp to non-deterministic hardware
      entropy (where available), the protocol ensures that clock
      manipulation produces cryptographic mismatches in the VDF chain.
      In software-only mode, system-provided entropy sources are used
      with correspondingly reduced assurance.
    </t>
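    <t>
      A non-normative sketch of the entropic pulse computation. The
      DST_CLOCK byte string and the 64-bit big-endian timestamp encoding
      are illustrative assumptions, not values fixed by this document:
    </t>
    <sourcecode type="python"><![CDATA[
```python
import hashlib
import hmac
import struct

DST_CLOCK = b"PoP-CEE-v1"  # domain separation tag; exact value illustrative

def entropic_pulse(k_session: bytes, timestamp_ns: int,
                   hw_entropy: bytes) -> bytes:
    """P = HMAC(K_session, DST_CLOCK || timestamp || hardware_entropy)."""
    msg = DST_CLOCK + struct.pack(">Q", timestamp_ns) + hw_entropy
    return hmac.new(k_session, msg, hashlib.sha256).digest()
```
]]></sourcecode>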
  </section>

  <section anchor="privacy-preserving-timing">
    <name>Privacy-Preserving Timing Protection</name>
    <t>
      To prevent cross-session correlation of behavioral timing
      patterns, the Attesting Environment applies a session-specific
      non-linear transformation to timing metrics before committing
      them to the evidence chain:
    </t>
    <artwork><![CDATA[
T_committed = Transform(T_measured, K_session)
]]></artwork>
    <t>
      The transformation preserves the internal statistical properties
      required for source consistency analysis (relative distributions,
      spectral characteristics, evolution patterns) while altering
      absolute values that could serve as a biometric fingerprint.
      The Verifier, possessing the session key derivation material,
      can evaluate consistency properties without recovering the
      original timing values.
    </t>
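    <t>
      One possible (non-normative) realization is a strictly increasing
      transform whose parameters are derived from the session key. Any
      strictly monotone function preserves rank order, and therefore
      Spearman-style consistency statistics, while hiding absolute timing
      values; the specific parameter derivation below is a hypothetical
      sketch:
    </t>
    <sourcecode type="python"><![CDATA[
```python
import hashlib

def session_transform(t_ms: float, k_session: bytes) -> float:
    """Session-keyed, strictly increasing transform of a timing value
    (t_ms >= 0). Rank order is preserved; absolute values are not."""
    d = hashlib.sha256(k_session + b"pop-timing-transform").digest()
    scale = 1.0 + int.from_bytes(d[0:4], "big") / 2**32   # in [1, 2)
    gamma = 0.5 + int.from_bytes(d[4:8], "big") / 2**33   # in [0.5, 1)
    offset = int.from_bytes(d[8:12], "big") / 2**20       # in [0, 4096)
    return scale * (t_ms ** gamma) + offset
```
]]></sourcecode>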
  </section>

  <section anchor="error-topology">
    <name>Error Topology and Fractal Invariants</name>
    <t>
      The "Error Topology" invariant captures the physiological signature of human
      mistakes. Unlike mathematical randomness, human typos follow a "Fractal Error
      Pattern" comprising four phases: Physical Mistake (e.g., adjacent key strike,
      <xref target="Grudin1983"/>) -> Cognitive Recognition Gap (an approximately 150 ms
      saccade-feedback loop, <xref target="Rayner1998"/>) -> Reflexive Burst (rapid
      backspacing) -> Correction.
    </t>
    <t>
      The Evidence includes a ZK-Proof (STARK) attesting that the distribution of
      deletions and corrections satisfies a composite biological score S >= 0.75,
      derived from the Spearman correlation of recognition gaps with token
      complexity (rho_gap), the Hurst exponent (H) for self-similar persistence
      <xref target="Mandelbrot1982"/>, and the physical adjacency ratio (adj_phys):
    </t>
    <artwork><![CDATA[
S = 0.4*rho_gap + 0.4*H + 0.2*adj_phys >= 0.75
]]></artwork>
    <t>
      This topology proves that the editing process adheres to biological motor-skill
      constraints, which are computationally expensive for non-biological agents to
      simulate within the sequential constraints of the VDF chain.
    </t>
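    <t>
      The weighted score and its acceptance test can be sketched
      (non-normatively) as:
    </t>
    <sourcecode type="python"><![CDATA[
```python
def biological_score(rho_gap: float, hurst: float, adj_phys: float,
                     threshold: float = 0.75):
    """Composite score S = 0.4*rho_gap + 0.4*H + 0.2*adj_phys and the
    S >= threshold acceptance decision."""
    s = 0.4 * rho_gap + 0.4 * hurst + 0.2 * adj_phys
    return s, s >= threshold
```
]]></sourcecode>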
  </section>

  <section anchor="semantic-correlation">
    <name>Cognitive Load and Semantic Correlation</name>
    <t>
      The Behavioral Consistency invariant is grounded in the neurobiological constraint of
      Cognitive Load Delays (CLD). Human typing exhibits 50-300ms inter-keystroke
      spikes when processing complex or rare tokens <xref target="Kushniruk1991"/>.
    </t>
    <t>
      Verification of human origin is achieved through the correlation of Information
      Density (D) and Timing Density (tau) per segment i. To protect author privacy,
      timing histograms are ZK-private inputs; only the cryptographic commitment is
      revealed. Segments with LZ Complexity &lt; 0.2 are excluded from scoring
      (Complexity Gating) to prevent Signal Starvation. The Information Density of
      segment i is computed as:
    </t>
    <artwork><![CDATA[
D_i = LZ_Complexity(s_i) / |s_i|
]]></artwork>
    <t>
      Where s_i is the segment content and D_i is its Lempel-Ziv (LZ) complexity,
      approximated by the deterministic DEFLATE (RFC 1951, zlib) compression ratio
      and normalized by segment length. The Jitter Seal reports ranked delays
      (tau_i) in quantized buckets. The Evidence is valid if and only if the
      Spearman rank correlation satisfies:
    </t>
    <artwork><![CDATA[
rho(D, tau) = corr(rank(D), rank(tau)) >= theta = 0.7
]]></artwork>
    <t>
      This mechanism ensures that "Semantic Spikes" in timing align with spikes in
      linguistic complexity.
    </t>
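    <t>
      A non-normative sketch of both quantities follows. The DEFLATE
      ratio is a stand-in for LZ complexity as described above, and the
      rank computation omits tie correction for brevity:
    </t>
    <sourcecode type="python"><![CDATA[
```python
import zlib

def information_density(segment: str) -> float:
    """D_i: DEFLATE compression ratio of a segment, a proxy for
    normalized LZ complexity."""
    raw = segment.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

def spearman_rho(xs, ys):
    """Spearman rank correlation (no tie correction; illustrative)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0
```
]]></sourcecode>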
  </section>

  <section anchor="zk-cognitive-load-verification">
    <name>Zero-Knowledge Cognitive Load Verification</name>

    <t>
      The Spearman correlation verification described in
      <xref target="semantic-correlation"/> requires access to both the timing
      histogram (pause durations per segment) and the complexity histogram
      (LZ compression ratios per segment) to compute rho. This requirement
      creates a tension with the content-agnosticism principle: while the
      timing data is already ZK-private via bucket commitments, revealing
      the complexity histogram to a Verifier would disclose information
      about the document's linguistic structure, potentially enabling
      reconstruction of content patterns or stylometric fingerprinting.
    </t>

    <section anchor="zk-cog-problem">
      <name>Problem Statement</name>

      <t>
        The core challenge is proving that a correlation exists between two
        private datasets without revealing either dataset:
      </t>

      <ul>
        <li>
          The pause histogram (tau) captures inter-keystroke intervals
          aggregated into timing buckets. This data is privacy-sensitive
          as it may constitute biometric-adjacent behavioral information.
        </li>
        <li>
          The complexity histogram (D) captures normalized LZ compression
          ratios per segment. While derived from content, revealing the
          distribution exposes structural information about the document.
        </li>
        <li>
          The Verifier needs assurance that rho(D, tau) >= 0.7 without
          learning either D or tau individually.
        </li>
      </ul>

      <t>
        Zero-knowledge proofs resolve this tension by enabling the Attester
        to prove the correlation relationship holds while revealing only the
        Boolean outcome (satisfied/not satisfied) and a confidence band.
      </t>
    </section>

    <section anchor="zk-cog-snark">
      <name>SNARK-Based Verification (Maximum Tier)</name>

      <t>
        For Maximum tier Evidence, a Succinct Non-interactive ARgument of
        Knowledge (SNARK) is employed to prove the correlation claim.
        The circuit encodes:
      </t>

      <artwork><![CDATA[
Public inputs:
  - threshold: 700 (representing rho >= 0.7)
  - segment_count: n
  - pause_commitment: H(tau_1, ..., tau_n)
  - complexity_commitment: H(D_1, ..., D_n)

Private inputs (witness):
  - tau[]: pause histogram values
  - D[]: complexity histogram values

Circuit constraints:
  1. H(tau[]) == pause_commitment
  2. H(D[]) == complexity_commitment
  3. rank(tau[]) computed correctly
  4. rank(D[]) computed correctly
  5. spearman_rho(rank(tau), rank(D)) >= threshold
]]></artwork>

      <t>
        The SNARK proof is approximately 200-300 bytes for Groth16 or on
        the order of 1 KB for PLONK; transparent STARK variants produce
        substantially larger proofs (tens of kilobytes) but avoid trusted
        setup. Verification runs in constant time. The
        public-parameters-hash binds the proof to a specific circuit
        version, enabling algorithm agility while preventing substitution
        attacks.
      </t>

      <t>
        SNARK verification is computationally efficient for Verifiers
        (milliseconds) but proof generation requires significant Attester
        resources (seconds to minutes depending on segment count).
        This asymmetry is acceptable for the Maximum tier where the
        strongest assurance is required.
      </t>
    </section>

    <section anchor="zk-cog-pedersen">
      <name>Pedersen Commitment Fallback (Enhanced Tier)</name>

      <t>
        For Enhanced tier Evidence where SNARK proving infrastructure
        may not be available, a Pedersen commitment scheme with Bulletproof
        range proofs provides weaker but still meaningful ZK assurance:
      </t>

      <dl>
        <dt>commitment-rho:</dt>
        <dd>
          Pedersen commitment to the computed Spearman rho value,
          C = g^rho * h^r where r is the nonce.
        </dd>

        <dt>commitment-pause-histogram:</dt>
        <dd>
          Pedersen commitment to the pause histogram vector,
          binding the Attester to specific timing data.
        </dd>

        <dt>commitment-complexity-histogram:</dt>
        <dd>
          Pedersen commitment to the complexity histogram vector,
          binding the Attester to specific structural data.
        </dd>

        <dt>range-proofs:</dt>
        <dd>
          Bulletproof range proofs demonstrating that:
          (a) rho falls within [-1.0, 1.0],
          (b) rho >= threshold (0.7),
          (c) all histogram values are non-negative.
        </dd>

        <dt>consistency-binding-proof:</dt>
        <dd>
          Proof that the committed rho was correctly computed from
          the committed histograms using Spearman's formula.
        </dd>
      </dl>

      <t>
        The Pedersen approach requires larger proofs (1-5 KB depending on
        segment count); its commitments are perfectly hiding but only
        computationally binding, and the overall argument provides weaker
        assurance than a full SNARK. However, it uses well-established
        elliptic curve cryptography without trusted setup requirements.
      </t>
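      <t>
        The structure of the commitment C = g^rho * h^r can be illustrated
        with a deliberately toy construction over an integer group. This is
        NOT a secure instantiation: deployments use elliptic-curve groups
        with verifiably independent generators, and the modulus and
        generators below are arbitrary demonstration values:
      </t>
      <sourcecode type="python"><![CDATA[
```python
# Toy Pedersen commitment in a multiplicative group mod a prime.
# ILLUSTRATIVE ONLY: constants are arbitrary; not secure parameters.
P = 2**255 - 19          # prime modulus chosen for familiarity
G, H = 5, 7              # assumed-independent generators (hypothetical)

def commit(value: int, nonce: int) -> int:
    """C = G^value * H^nonce mod P."""
    return (pow(G, value, P) * pow(H, nonce, P)) % P
```
]]></sourcecode>
      <t>
        The scheme is additively homomorphic: the product of two
        commitments commits to the sum of their values, which is the
        property the consistency-binding proof exploits.
      </t>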
    </section>

    <section anchor="zk-cog-claims">
      <name>What ZK Proofs Do and Do Not Claim</name>

      <t>
        Zero-knowledge cognitive load proofs establish the following:
      </t>

      <dl>
        <dt>Correlation Verified (PROVEN):</dt>
        <dd>
          The Spearman rank correlation between the Attester's private
          pause histogram and private complexity histogram meets or
          exceeds the threshold. This is cryptographically bound.
        </dd>

        <dt>Data Consistency (PROVEN):</dt>
        <dd>
          The committed histograms match those used in correlation
          computation. The Attester cannot claim correlation with
          fabricated data without invalidating the proof.
        </dd>

        <dt>Confidence Band (DOCUMENTED):</dt>
        <dd>
          Statistical confidence intervals account for sample size
          effects and provide Verifiers with uncertainty bounds.
        </dd>
      </dl>

      <t>
        Zero-knowledge cognitive load proofs explicitly do NOT claim:
      </t>

      <dl>
        <dt>Cognitive Origin:</dt>
        <dd>
          Correlation is consistent with but does not prove cognitive
          engagement. The proof establishes a statistical relationship,
          not a causal mechanism. Sophisticated simulation could
          potentially produce correlated timing, though at significant
          computational cost (see <xref target="forgery-cost-bounds"/>).
        </dd>

        <dt>Human Authorship:</dt>
        <dd>
          No claim is made that a human (as opposed to a sufficiently
          sophisticated automation) produced the input. The proof
          documents observable correlation, not its source.
        </dd>

        <dt>Content Quality:</dt>
        <dd>
          The proof says nothing about the semantic quality, originality,
          or value of the document content. It attests only to process
          characteristics.
        </dd>

        <dt>Absence of Assistance:</dt>
        <dd>
          The proof does not exclude the possibility that the author
          used tools, references, or other aids during creation.
          It documents the observable editing process, not the author's
          cognitive sources.
        </dd>
      </dl>
    </section>

    <section anchor="zk-cog-tier-mapping">
      <name>Evidence Tier Mapping</name>

      <t>
        The correlation-algorithm enumeration maps to evidence tiers:
      </t>

      <table>
        <thead>
          <tr>
            <th>Algorithm</th>
            <th>Tier</th>
            <th>ZK Property</th>
            <th>Verifier Trust</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>no-proof (0)</td>
            <td>Basic</td>
            <td>None</td>
            <td>Trusts Attester's rho claim</td>
          </tr>
          <tr>
            <td>spearman-pedersen (2)</td>
            <td>Enhanced</td>
            <td>Computational hiding</td>
            <td>Verifies commitment consistency</td>
          </tr>
          <tr>
            <td>spearman-snark (1)</td>
            <td>Maximum</td>
            <td>Perfect ZK</td>
            <td>Cryptographic proof of relation</td>
          </tr>
        </tbody>
      </table>

      <t>
        Verifiers SHOULD require ZK proofs (algorithm > 0) for Enhanced
        and Maximum tier claims. Basic tier Evidence with no-proof is
        suitable only for contexts where the Attesting Environment is
        fully trusted or where process documentation rather than
        adversarial verification is the goal.
      </t>
    </section>

    <section anchor="zk-cog-scope-limits">
      <name>Explicit Scope Limitations</name>

      <t>
        Per the architectural constraints in this specification, the
        ZK cognitive load verification mechanism:
      </t>

      <ul>
        <li>
          Does NOT perform AI detection or classification. The mechanism
          documents correlation patterns without inferring their source.
        </li>
        <li>
          Does NOT make stylometric claims. Linguistic analysis of content
          is explicitly out of scope.
        </li>
        <li>
          Does NOT infer intent or cognitive state. Observable timing
          correlation is documented, not interpreted.
        </li>
        <li>
          Does NOT capture document content. Both histograms are derived
          measurements, not content reproductions.
        </li>
        <li>
          Does NOT provide surveillance capabilities. Aggregate statistics
          are verified, not raw input streams.
        </li>
      </ul>

      <t>
        Evidence generated through this mechanism is tamper-evident and
        independently verifiable, but interpretation of what the evidence
        means remains the responsibility of Relying Parties applying their
        own policies and risk tolerances.
      </t>
    </section>
  </section>

  <section anchor="biology-invariant-parameters">
    <name>Biology Invariant Parameter Configuration</name>

    <t>
      The composite biological score formula presented in <xref target="error-topology"/>
      uses hardcoded weights and thresholds that lack empirical validation.
      To enable transparent evolution of these parameters as research matures,
      this section defines a versioned parameter configuration structure with
      explicit confidence levels indicating the validation status of each parameter.
    </t>

    <section anchor="bio-param-validation-status">
      <name>Validation Status Taxonomy</name>

      <t>
        Each parameter set carries a validation-status indicator that communicates
        the epistemic basis for the parameter values to Verifiers and Relying Parties:
      </t>

      <dl>
        <dt>Empirical (1):</dt>
        <dd>
          Parameters validated through published peer-reviewed studies with
          reproducible methodology. The validation-reference field MUST contain
          a DOI or equivalent stable identifier for the validating research.
        </dd>

        <dt>Theoretical (2):</dt>
        <dd>
          Parameters derived from established literature on human motor control,
          cognitive load, or psycholinguistics, but not directly validated for
          the specific use case of behavioral attestation. This is the current
          status of all parameters defined in this specification.
        </dd>

        <dt>Unsupported (3):</dt>
        <dd>
          Parameters that are placeholders or heuristics without theoretical
          or empirical basis. Relying Parties SHOULD treat claims using
          unsupported parameters with heightened skepticism and MAY reject
          such evidence entirely depending on policy.
        </dd>
      </dl>
    </section>

    <section anchor="bio-param-structure">
      <name>Parameter Configuration Structure</name>

      <t>
        The biology-scoring-parameters structure encapsulates all configurable
        parameters for the biological invariant evaluation, encoded in CBOR
        per the following CDDL schema:
      </t>

      <artwork type="cddl"><![CDATA[
; Biology Invariant Scoring Parameters
; Version: v1.0-draft (validation-status: theoretical)

biology-scoring-parameters = {
    1 => tstr,                 ; version (e.g., "v1.0-draft")
    2 => weight-config,        ; scoring weights
    3 => threshold-config,     ; threshold values
    4 => validation-status,    ; epistemic basis
    ? 5 => tstr,               ; validation-reference (DOI/URL)
    ? 6 => context-profile,    ; optional context-specific profile
}

weight-config = {
    1 => uint,                 ; rho-gap-weight-millibits (400 = 0.4)
    2 => uint,                 ; hurst-weight-millibits (400 = 0.4)
    3 => uint,                 ; adj-phys-weight-millibits (200 = 0.2)
}

threshold-config = {
    1 => uint,                 ; composite-score-min-millibits (750 = 0.75)
    2 => uint,                 ; rho-correlation-min-millibits (700 = 0.7)
    3 => uint,                 ; pink-noise-slope-min-millibits (800 = 0.8)
    4 => uint,                 ; pink-noise-slope-max-millibits (1200 = 1.2)
    5 => uint,                 ; hurst-min-millibits (550 = 0.55)
    6 => uint,                 ; hurst-max-millibits (850 = 0.85)
    ? 7 => uint,               ; lz-complexity-min-millibits (200 = 0.2)
}

validation-status = &(
    empirical: 1,              ; Validated via published study
    theoretical: 2,            ; Based on literature, not validated
    unsupported: 3,            ; Parameters need validation
)

context-profile = &(
    default_v1: 1,             ; General-purpose defaults
    prose_v1: 2,               ; Optimized for natural language prose
    technical_v1: 3,           ; Optimized for code/technical content
    mixed_v1: 4,               ; Mixed prose and technical content
)

; Biology invariant claim structure for inclusion in Evidence
biology-invariant-claim = {
    1 => uint,                 ; score-millibits (computed composite score)
    2 => validation-status,    ; parameter validation level
    3 => tstr,                 ; parameters-version
    4 => bstr,                 ; parameters-hash (SHA-256 of params)
    ? 5 => [* tstr],           ; context-warnings
    ? 6 => context-profile,    ; profile used for evaluation
}
]]></artwork>
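      <t>
        A non-normative sanity check over a decoded parameter instance
        (integer-keyed maps mirroring the CDDL field numbers; the specific
        checks are illustrative, not exhaustive):
      </t>
      <sourcecode type="python"><![CDATA[
```python
def validate_biology_params(weights: dict, thresholds: dict) -> bool:
    """Sanity-check a biology-scoring-parameters instance (millibit
    units). Keys mirror the CDDL field numbers above."""
    weights_ok = sum(weights.values()) == 1000   # weights sum to 1.0
    hurst_ok = thresholds[5] < thresholds[6]     # hurst-min < hurst-max
    slope_ok = thresholds[3] < thresholds[4]     # slope-min < slope-max
    return weights_ok and hurst_ok and slope_ok

# v1.0-draft defaults from this specification
default_weights = {1: 400, 2: 400, 3: 200}
default_thresholds = {1: 750, 2: 700, 3: 800, 4: 1200, 5: 550, 6: 850, 7: 200}
```
]]></sourcecode>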
    </section>

    <section anchor="bio-param-current">
      <name>Current Parameter Values (v1.0-draft)</name>

      <t>
        The following parameter values are defined for version "v1.0-draft".
        All parameters carry validation-status: theoretical (2), indicating
        they are derived from literature but not empirically validated for
        behavioral attestation:
      </t>

      <table>
        <name>Default Profile (default_v1) Parameters</name>
        <thead>
          <tr>
            <th>Parameter</th>
            <th>Millibits Value</th>
            <th>Decimal Equivalent</th>
            <th>Literature Basis</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>rho-gap-weight</td>
            <td>400</td>
            <td>0.4</td>
            <td>Grudin 1983 (error patterns)</td>
          </tr>
          <tr>
            <td>hurst-weight</td>
            <td>400</td>
            <td>0.4</td>
            <td>Mandelbrot 1982 (fractal time series)</td>
          </tr>
          <tr>
            <td>adj-phys-weight</td>
            <td>200</td>
            <td>0.2</td>
            <td>QWERTY adjacency heuristic</td>
          </tr>
          <tr>
            <td>composite-score-min</td>
            <td>750</td>
            <td>0.75</td>
            <td>Heuristic threshold</td>
          </tr>
          <tr>
            <td>rho-correlation-min</td>
            <td>700</td>
            <td>0.7</td>
            <td>Kushniruk 1991 (cognitive load)</td>
          </tr>
          <tr>
            <td>pink-noise-slope-min</td>
            <td>800</td>
            <td>0.8</td>
            <td>1/f noise literature</td>
          </tr>
          <tr>
            <td>pink-noise-slope-max</td>
            <td>1200</td>
            <td>1.2</td>
            <td>1/f noise literature</td>
          </tr>
          <tr>
            <td>hurst-min</td>
            <td>550</td>
            <td>0.55</td>
            <td>Mandelbrot 1982; empirical validation</td>
          </tr>
          <tr>
            <td>hurst-max</td>
            <td>850</td>
            <td>0.85</td>
            <td>Mandelbrot 1982; empirical validation</td>
          </tr>
          <tr>
            <td>lz-complexity-min</td>
            <td>200</td>
            <td>0.2</td>
            <td>Signal starvation prevention</td>
          </tr>
        </tbody>
      </table>

      <t>
        IMPORTANT: These parameters are designated validation-status: theoretical (2).
        The weights (0.4, 0.4, 0.2) were selected based on general principles from
        motor control and cognitive load literature, NOT from empirical validation
        against adversarial simulation or controlled authorship studies. Relying
        Parties SHOULD interpret biological invariant claims accordingly and MAY
        apply additional policy constraints for high-stakes verification contexts.
      </t>
    </section>

    <section anchor="bio-param-profiles">
      <name>Context-Specific Profiles</name>

      <t>
        Different content types exhibit different behavioral signatures.
        The following profiles provide context-specific parameter adjustments:
      </t>

      <section anchor="bio-param-prose">
        <name>Prose Profile (prose_v1)</name>
        <t>
          Optimized for natural language prose authorship. Assumes higher
          cognitive load variation during complex sentence construction and
          lower physical adjacency errors compared to code. Uses default
          parameters with the following adjustments:
        </t>
        <ul>
          <li>rho-gap-weight: 450 (0.45) - Higher weight on cognitive correlation</li>
          <li>hurst-weight: 400 (0.4) - Unchanged</li>
          <li>adj-phys-weight: 150 (0.15) - Lower weight on physical adjacency</li>
        </ul>
        <t>
          Validation-status: unsupported (3). These adjustments are hypothetical
          and require empirical validation.
        </t>
      </section>

      <section anchor="bio-param-technical">
        <name>Technical Profile (technical_v1)</name>
        <t>
          Optimized for source code and technical content. Assumes higher
          physical adjacency error rates due to specialized characters and
          lower cognitive load correlation due to repetitive syntax patterns.
        </t>
        <ul>
          <li>rho-gap-weight: 300 (0.3) - Lower weight on cognitive correlation</li>
          <li>hurst-weight: 400 (0.4) - Unchanged</li>
          <li>adj-phys-weight: 300 (0.3) - Higher weight on physical adjacency</li>
          <li>composite-score-min: 700 (0.70) - Lower threshold for code</li>
        </ul>
        <t>
          Validation-status: unsupported (3). These adjustments are hypothetical
          and require empirical validation.
        </t>
      </section>
    </section>

    <section anchor="bio-param-versioning">
      <name>Parameter Versioning</name>

      <t>
        Parameter version strings follow the format "v{major}.{minor}-{status}"
        where status is one of "draft", "experimental", or "stable":
      </t>

      <ul>
        <li>
          <strong>draft:</strong> Initial parameters under active development.
          MAY change without notice. Suitable only for testing.
        </li>
        <li>
          <strong>experimental:</strong> Parameters undergoing validation studies.
          Changes require documentation. Suitable for non-adversarial contexts.
        </li>
        <li>
          <strong>stable:</strong> Parameters with empirical validation.
          Changes require major version increment. Suitable for adversarial review.
        </li>
      </ul>

      <t>
        Implementations MUST include the parameters-hash in biology-invariant-claim
        to enable Verifiers to confirm which exact parameter values were used,
        regardless of version string. The hash is computed as:
      </t>

      <artwork><![CDATA[
parameters-hash = SHA-256(
    CBOR-encode(biology-scoring-parameters)
)
]]></artwork>
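      <t>
        The hash is only reproducible if the CBOR encoding is
        deterministic (see RFC 8949, Section 4.2). The following
        non-normative sketch hand-encodes a small integer-keyed map (such
        as weight-config) with sorted keys and minimal-length integer
        heads, then hashes it:
      </t>
      <sourcecode type="python"><![CDATA[
```python
import hashlib

def _cbor_head(n: int, major: int) -> bytes:
    """Minimal-length CBOR head for a small unsigned argument."""
    mt = major << 5
    if n < 24:
        return bytes([mt | n])
    if n < 256:
        return bytes([mt | 24, n])
    return bytes([mt | 25]) + n.to_bytes(2, "big")  # <= 65535 suffices here

def cbor_uint_map(d: dict) -> bytes:
    """Deterministically encode a {uint: uint} map (sorted keys)."""
    out = _cbor_head(len(d), 5)          # major type 5: map
    for k in sorted(d):
        out += _cbor_head(k, 0) + _cbor_head(d[k], 0)
    return out

def parameters_hash(encoded: bytes) -> bytes:
    return hashlib.sha256(encoded).digest()
```
]]></sourcecode>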

      <t>
        Verifiers SHOULD maintain a registry of known parameter hashes and their
        associated validation status to enable policy-based acceptance or rejection
        of Evidence using specific parameter versions.
      </t>
    </section>

    <section anchor="bio-param-research-limitations">
      <name>Research Limitations Acknowledgment</name>

      <t>
        The behavioral invariant parameters defined in this specification are
        subject to the following research limitations that Relying Parties
        MUST consider when interpreting biological invariant claims:
      </t>

      <ol>
        <li>
          <strong>No adversarial validation:</strong> Parameters have not been
          tested against sophisticated simulation attacks. An adversary with
          knowledge of the scoring formula could potentially craft timing
          patterns that satisfy the thresholds.
        </li>
        <li>
          <strong>Population variance:</strong> Human typing behavior varies
          significantly across individuals, input devices, fatigue levels,
          and content types. Fixed thresholds may produce false negatives
          for atypical but genuine authors.
        </li>
        <li>
          <strong>Content dependence:</strong> The correlation between cognitive
          load and timing delays depends on content complexity. Highly
          formulaic content (forms, templates, repetitive text) may not
          exhibit expected behavioral signatures.
        </li>
        <li>
          <strong>Device dependence:</strong> Timing resolution and jitter
          characteristics vary across input devices and platforms, potentially
          affecting score reproducibility.
        </li>
        <li>
          <strong>Literature extrapolation:</strong> Referenced studies
          (Grudin 1983, Mandelbrot 1982, Kushniruk 1991, Rayner 1998) address
          related phenomena but were not designed for behavioral attestation.
          Extrapolation to this context requires validation.
        </li>
      </ol>

      <t>
        Future versions of this specification MAY update parameters based on
        empirical research. Implementations SHOULD support parameter version
        negotiation to enable graceful migration as the evidence base matures.
      </t>
    </section>

    <section anchor="active-probes">
      <name>Active Behavioral Probes</name>

      <t>
        Beyond passive timing analysis, implementations MAY employ active
        behavioral probes that analyze response characteristics to specific
        interaction patterns. These probes provide additional validation
        signals that are difficult to synthesize without genuine human
        motor system involvement.
      </t>

      <section anchor="galton-invariant">
        <name>Galton Invariant Probe</name>

        <t>
          The Galton Invariant measures rhythm perturbation recovery by
          analyzing how quickly and consistently timing returns to baseline
          after disruption events (e.g., errors, pauses, context switches).
          Named after the Galton board's probability distribution, this
          probe characterizes the "absorption coefficient" of behavioral
          rhythm perturbations.
        </t>

        <t>Parameters:</t>
        <ul>
          <li>
            <strong>Absorption coefficient (α):</strong> Valid range
            α ∈ [0.3, 0.8]. Values below 0.3 indicate insufficient rhythm
            recovery (possibly synthetic constant-rate input). Values above
            0.8 indicate excessive damping (possibly mechanically smoothed).
          </li>
          <li>
            <strong>Time constant (τ):</strong> Recovery half-life in
            milliseconds. Typical human values: 200-800ms.
          </li>
          <li>
            <strong>Asymmetry factor:</strong> Ratio of positive to negative
            perturbation recovery. Human motor systems typically show mild
            asymmetry (factor 0.8-1.2).
          </li>
        </ul>
      </section>

      <section anchor="reflex-gate">
        <name>Reflex Gate Probe</name>

        <t>
          The Reflex Gate measures minimum achievable latency and its
          variability, characterizing neural pathway delay constraints.
          Human motor responses exhibit floor latencies imposed by
          physiological signal propagation that cannot be bypassed by
          simulation.
        </t>

        <t>Parameters:</t>
        <ul>
          <li>
            <strong>Minimum latency:</strong> MUST be ≥ 100ms for valid
            human input. Latencies consistently below 100ms indicate
            either hardware/software injection or pre-computed responses.
          </li>
          <li>
            <strong>Coefficient of variation (CV):</strong> Valid range
            CV ∈ [0.15, 0.40]. Values below 0.15 indicate mechanical
            consistency. Values above 0.40 indicate either extreme
            fatigue or non-physiological variation patterns.
          </li>
          <li>
            <strong>Distribution shape:</strong> Human reaction times
            follow ex-Gaussian distributions. Significant deviation from
            this shape indicates synthetic generation.
          </li>
        </ul>
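
        <t>
          As a non-normative illustration, the parameter ranges above can be
          checked directly from a list of response latencies. The function
          name and return shape below are illustrative, not part of this
          specification.
        </t>

```python
import statistics

# Thresholds mirror the prose above: floor latency >= 100 ms,
# coefficient of variation in [0.15, 0.40].
MIN_LATENCY_MS = 100.0
CV_RANGE = (0.15, 0.40)

def reflex_gate_check(latencies_ms):
    """Return (passed, details) for a list of response latencies in ms."""
    floor = min(latencies_ms)
    mean = statistics.fmean(latencies_ms)
    cv = statistics.stdev(latencies_ms) / mean
    passed = floor >= MIN_LATENCY_MS and CV_RANGE[0] <= cv <= CV_RANGE[1]
    return passed, {"floor_ms": floor, "cv": round(cv, 3)}
```

        <t>
          Mechanically regular input (near-constant intervals) fails the CV
          lower bound, while plausibly human latencies pass both checks.
        </t>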
      </section>

      <section anchor="active-probes-security">
        <name>Active Probe Security Considerations</name>

        <t>
          Active probes increase the cost of successful simulation but
          do not provide absolute guarantees:
        </t>
        <ul>
          <li>
            Adversaries with knowledge of probe parameters could craft
            timing sequences that satisfy the validation ranges.
          </li>
          <li>
            Probe parameters are based on typical human physiology; atypical
            but genuine users may produce out-of-range values.
          </li>
          <li>
            Implementations SHOULD use active probes as supplementary
            signals rather than hard rejection criteria.
          </li>
        </ul>
      </section>
    </section>

    <section anchor="labyrinth-structure">
      <name>Labyrinth Structure Analysis</name>

      <t>
        The Labyrinth structure applies dynamical systems theory to
        characterize the phase space topology of timing sequences.
        Based on Takens' theorem for delay-coordinate embedding,
        this analysis reconstructs attractor geometry from the
        one-dimensional timing series, enabling detection of
        non-linear behavioral dynamics that are difficult to synthesize.
      </t>

      <section anchor="takens-embedding">
        <name>Delay-Coordinate Embedding</name>

        <t>
          Given a timing interval sequence {t_1, t_2, ..., t_n}, the
          phase space reconstruction creates m-dimensional vectors:
        </t>

        <artwork><![CDATA[
    v_i = (t_i, t_{i+τ}, t_{i+2τ}, ..., t_{i+(m-1)τ})
        ]]></artwork>

        <t>
          where m is the embedding dimension and τ is the delay parameter.
          Parameters:
        </t>
        <ul>
          <li>
            <strong>Embedding dimension (m):</strong> Valid range 3-8.
            Lower values may miss attractor structure; higher values
            introduce spurious correlations.
          </li>
          <li>
            <strong>Delay parameter (τ):</strong> Selected via mutual
            information minimization or autocorrelation zero-crossing.
          </li>
        </ul>
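
        <t>
          The embedding above can be sketched in a few lines; the helper
          names are illustrative, and the autocorrelation zero-crossing
          heuristic implements one of the two delay-selection methods
          mentioned.
        </t>

```python
def delay_embed(series, m, tau):
    """Build m-dimensional delay vectors
    v_i = (t_i, t_{i+tau}, ..., t_{i+(m-1)tau})."""
    n = len(series) - (m - 1) * tau
    return [tuple(series[i + j * tau] for j in range(m)) for i in range(n)]

def autocorr_zero_crossing(series):
    """Select tau as the first lag where the autocorrelation
    crosses zero (one of the selection methods described above)."""
    mean = sum(series) / len(series)
    dev = [x - mean for x in series]
    var = sum(d * d for d in dev)
    if var == 0:
        return 1  # degenerate constant series
    for lag in range(1, len(series) // 2):
        r = sum(dev[i] * dev[i + lag] for i in range(len(series) - lag)) / var
        if r <= 0:
            return lag
    return len(series) // 2
```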
      </section>

      <section anchor="topological-invariants">
        <name>Topological Invariants</name>

        <t>
          The reconstructed phase space is characterized by topological
          invariants that distinguish genuine behavioral dynamics from
          synthetic sequences:
        </t>

        <dl>
          <dt>Correlation dimension (D_2):</dt>
          <dd>
            Measures the complexity of the attractor. Valid range:
            D_2 ∈ [1.5, 5.0]. Values near 1.0 indicate deterministic
            periodic behavior; values near the embedding dimension
            indicate stochastic noise.
          </dd>

          <dt>Betti numbers (β_0, β_1, β_2):</dt>
          <dd>
            Topological invariants counting connected components (β_0),
            loops (β_1), and voids (β_2) in the attractor. Human behavioral
            attractors typically show specific Betti number patterns
            reflecting cognitive-motor coupling dynamics.
          </dd>

          <dt>Recurrence rate:</dt>
          <dd>
            Fraction of recurrence points in the reconstructed phase space.
            Synthetic sequences often show either too high (periodic) or
            too low (random) recurrence rates compared to genuine
            behavioral dynamics.
          </dd>

          <dt>Determinism:</dt>
          <dd>
            Fraction of recurrence points forming diagonal lines in
            recurrence plots. Genuine behavioral sequences show
            intermediate determinism reflecting cognitive influence
            on motor timing.
          </dd>
        </dl>
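
        <t>
          Of the invariants above, the recurrence rate is the simplest to
          estimate from the embedded vectors. The sketch below uses a
          Chebyshev-norm threshold eps; both the norm and the O(n^2)
          pairwise scan are illustrative choices.
        </t>

```python
def recurrence_rate(vectors, eps):
    """Fraction of vector pairs closer than eps in the reconstructed
    phase space (the recurrence rate described above)."""
    n = len(vectors)
    if n < 2:
        return 0.0
    def dist(a, b):
        # Chebyshev (max-coordinate) norm, common in recurrence analysis
        return max(abs(x - y) for x, y in zip(a, b))
    hits = sum(1 for i in range(n) for j in range(i + 1, n)
               if dist(vectors[i], vectors[j]) <= eps)
    return hits / (n * (n - 1) / 2)
```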
      </section>

      <section anchor="labyrinth-security">
        <name>Labyrinth Analysis Security Considerations</name>

        <t>
          Phase space analysis provides additional forgery cost but is
          not a complete defense:
        </t>
        <ul>
          <li>
            Adversaries could potentially train generative models to
            produce timing sequences with specific topological properties.
          </li>
          <li>
            The computational cost of labyrinth analysis is significant;
            implementations MAY perform this analysis only for high-value
            evidence or as a secondary verification step.
          </li>
          <li>
            Topological analysis is sensitive to sequence length;
            short sessions may not provide sufficient data for reliable
            invariant estimation.
          </li>
        </ul>
      </section>
    </section>

    <section anchor="bio-param-interpretation">
      <name>Guidance for Interpreting Unsupported Confidence Levels</name>

      <t>
        When Evidence contains biology-invariant-claim with validation-status:
        unsupported (3), Verifiers and Relying Parties SHOULD apply the
        following interpretation guidance:
      </t>

      <ul>
        <li>
          The claim SHOULD NOT be treated as dispositive evidence of human
          authorship or the absence thereof.
        </li>
        <li>
          The claim MAY contribute to a broader forensic assessment when
          combined with other evidence types (VDF temporal bounds, external
          anchors, hardware attestation).
        </li>
        <li>
          High-stakes verification contexts (legal proceedings, academic
          integrity decisions with significant consequences) SHOULD require
          at least validation-status: theoretical (2) and preferably
          validation-status: empirical (1).
        </li>
        <li>
          Policy engines MAY define minimum validation-status thresholds
          for claim acceptance, expressed in the trust-policy structure
          defined in <xref target="trust-policies"/>.
        </li>
        <li>
          Attestation Results SHOULD include a caveat when biological
          invariant claims rely on unsupported parameters, using the
          caveats mechanism defined in <xref target="caveats"/>.
        </li>
      </ul>
    </section>
  </section>


  <section anchor="jitter-vdf-entanglement">
    <name>VDF Entanglement</name>

    <t>
      The Jitter Seal achieves temporal binding through entanglement
      with the VDF proof chain. The VDF input for segment N
      includes the jitter binding captured during that same segment:
    </t>

    <artwork><![CDATA[
VDF_input{N} = H(
    tree-root{N-1} ||
    content-hash{N} ||
    jitter-binding{N}.entropy-commitment
)

VDF_output{N} = VDF(VDF_input{N}, iterations{N})
]]></artwork>

    <t>
      This entanglement creates a causal dependency: the VDF output
      cannot be computed until the jitter entropy is captured and
      committed. Combined with the VDF's sequential computation
      requirement, this ensures that:
    </t>

    <ol>
      <li>
        The jitter data existed before the VDF computation began
      </li>
      <li>
        The checkpoint cannot be backdated without recomputing the
        entire VDF chain from that point forward
      </li>
      <li>
        The minimum time between checkpoints is bounded by VDF
        computation time plus jitter observation time
      </li>
    </ol>
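
    <t>
      A minimal sketch of the computation above, assuming SHA-256 as H and
      the iterated-hash VDF construction. Serialization by simple byte
      concatenation stands in for the CBOR-defined encoding and is an
      illustrative assumption.
    </t>

```python
import hashlib

def vdf_input(tree_root_prev: bytes, content_hash: bytes,
              entropy_commitment: bytes) -> bytes:
    """VDF_input{N} = H(tree-root{N-1} || content-hash{N} ||
    jitter-binding{N}.entropy-commitment), per the formula above."""
    return hashlib.sha256(
        tree_root_prev + content_hash + entropy_commitment).digest()

def iterated_vdf(seed: bytes, iterations: int) -> bytes:
    """VDF_output{N} = VDF(VDF_input{N}, iterations{N}) for the
    iterated-SHA-256 construction; inherently sequential."""
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out
```

    <t>
      The causal dependency is visible in the code: the VDF cannot start
      until the entropy commitment exists and is folded into its input.
    </t>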
  </section>

  <section anchor="jitter-verification">
    <name>Verification Procedure</name>

    <t>
      A Verifier appraises the Jitter Seal through the following
      procedure:
    </t>

    <ol>
      <li>
        <t>Structural Validation:</t>
        <t>
          Verify all required fields are present and correctly typed
          per the CDDL schema.
        </t>
      </li>

      <li>
        <t>Binding MAC Verification:</t>
        <t>
          Recompute the binding-mac using the segment chain key
          and verify it matches the provided value.
        </t>
      </li>

      <li>
        <t>Entropy Commitment Verification (if raw-intervals present):</t>
        <t>
          Recompute H(intervals) and verify it matches
          entropy-commitment.
        </t>
      </li>

      <li>
        <t>Histogram Consistency (if raw-intervals present):</t>
        <t>
          Recompute histogram buckets from raw intervals and verify
          consistency with the provided summary.
        </t>
      </li>

      <li>
        <t>Entropy Threshold Check:</t>
        <t>
          Verify estimated-entropy-bits meets the minimum threshold
          for the claimed evidence tier. RECOMMENDED minimum: 32 bits
          for Standard tier, 64 bits for Enhanced tier.
        </t>
      </li>

      <li>
        <t>Sample Count Check:</t>
        <t>
          Verify sample-count is consistent with the document size
          and claimed editing duration. Anomalously low sample counts
          relative to content length indicate potential evidence gaps.
        </t>
      </li>

      <li>
        <t>Anomaly Assessment:</t>
        <t>
          If anomaly-flags are present, incorporate them into the
          overall forensic assessment. The presence of anomalies
          does not invalidate the evidence but affects confidence.
        </t>
      </li>

      <li>
        <t>VDF Entanglement Verification:</t>
        <t>
          Verify the entropy-commitment appears in the VDF input
          computation for this checkpoint.
        </t>
      </li>
    </ol>
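
    <t>
      Steps 3 through 5 above can be sketched as follows. The big-endian
      uint32 serialization of intervals, the bucket convention, and SHA-256
      as the hash are illustrative assumptions; the normative encoding is
      defined by the CDDL schema.
    </t>

```python
import hashlib

STANDARD_TIER_MIN_BITS = 32  # RECOMMENDED minimum from step 5

def verify_entropy_commitment(raw_intervals_ms, entropy_commitment):
    """Step 3: recompute H(intervals) and compare to the commitment."""
    data = b"".join(int(v).to_bytes(4, "big") for v in raw_intervals_ms)
    return hashlib.sha256(data).digest() == entropy_commitment

def verify_histogram(raw_intervals_ms, boundaries_ms, claimed_counts):
    """Step 4: rebuild histogram buckets and compare with the summary."""
    counts = [0] * (len(boundaries_ms) + 1)
    for v in raw_intervals_ms:
        counts[sum(1 for b in boundaries_ms if v >= b)] += 1
    return counts == claimed_counts

def entropy_threshold_ok(estimated_entropy_bits,
                         tier_min=STANDARD_TIER_MIN_BITS):
    """Step 5: check the tier's minimum entropy requirement."""
    return estimated_entropy_bits >= tier_min
```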

    <t>
      The verification result contributes to the computationally-bound
      claims defined in the absence-section:
    </t>

    <ul>
      <li>
        jitter-entropy-above-threshold (claim type 8): PROVEN if
        estimated-entropy-bits exceeds threshold
      </li>
      <li>
        jitter-samples-above-count (claim type 9): PROVEN if
        sample-count exceeds threshold
      </li>
    </ul>
  </section>

  <section anchor="jitter-anomalies">
    <name>Anomaly Detection</name>

    <t>
      The Attesting Environment MAY flag anomalies in the captured
      timing data:
    </t>

    <table>
      <thead>
        <tr>
          <th>Value</th>
          <th>Flag</th>
          <th>Indication</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>1</td>
          <td>unusually-regular</td>
          <td>
            Timing distribution has lower variance than typical
            human input (coefficient of variation &lt; 0.1)
          </td>
        </tr>
        <tr>
          <td>2</td>
          <td>burst-detected</td>
          <td>
            Sustained high-speed input exceeding 200 WPM for
            &gt;30 seconds
          </td>
        </tr>
        <tr>
          <td>3</td>
          <td>gap-detected</td>
          <td>
            Significant editing gap (&gt;5 minutes) within what
            appears to be a continuous session
          </td>
        </tr>
        <tr>
          <td>4</td>
          <td>paste-heavy</td>
          <td>
            &gt;50% of content added via paste operations in this
            segment interval
          </td>
        </tr>
        <tr>
          <td>5</td>
          <td>semantic-mismatch</td>
          <td>
            Low correlation (rho &lt; 0.5) between Information Density and
            Timing Density across segment intervals
          </td>
        </tr>
      </tbody>
    </table>
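
    <t>
      A non-normative sketch of detecting flags 1, 3, and 4 from the table
      above, with thresholds taken from the Indication column; the input
      shapes and function name are illustrative.
    </t>

```python
import statistics

def anomaly_flags(intervals_ms, pasted_chars=0, typed_chars=1):
    """Return the set of anomaly flag values raised by this segment."""
    flags = set()
    mean = statistics.fmean(intervals_ms)
    if len(intervals_ms) > 1 and statistics.stdev(intervals_ms) / mean < 0.1:
        flags.add(1)  # unusually-regular: coefficient of variation < 0.1
    if max(intervals_ms) > 5 * 60 * 1000:
        flags.add(3)  # gap-detected: >5 minute gap within the session
    total = pasted_chars + typed_chars
    if total and pasted_chars / total > 0.5:
        flags.add(4)  # paste-heavy: >50% of content added via paste
    return flags
```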

    <t>
      Anomaly flags are informational and do not constitute claims
      about authorship or intent. They provide context for Verifier
      appraisal and Relying Party decision-making.
    </t>
  </section>

  <section anchor="jitter-rats-differentiation">
    <name>Relationship to RATS Evidence</name>

    <t>
      The Jitter Seal extends the RATS evidence model
      <xref target="RFC9334"/> in several ways:
    </t>

    <dl>
      <dt>Behavioral Evidence:</dt>
      <dd>
        <t>
          Traditional RATS evidence attests to system state (software
          versions, configuration, integrity). The Jitter Seal attests
          to behavioral characteristics of the input stream, capturing
          properties that emerge only during genuine interaction.
        </t>
      </dd>

      <dt>Continuous Attestation:</dt>
      <dd>
        <t>
          Unlike point-in-time attestation, Jitter Seals are
          accumulated throughout an authoring session. Each checkpoint
          adds to the behavioral evidence corpus, with earlier seals
          constraining what later seals can claim.
        </t>
      </dd>

      <dt>Non-Reproducible Evidence:</dt>
      <dd>
        <t>
          RATS evidence can typically be regenerated by returning to
          the same system state. Jitter Seal evidence cannot be
          regenerated because the timing entropy existed only at the
          moment of capture.
        </t>
      </dd>

      <dt>Epoch Marker Compatibility:</dt>
      <dd>
        <t>
          The VDF-entangled Jitter Seal can function as a local
          freshness mechanism compatible with the Epoch Markers
          framework <xref target="I-D.ietf-rats-epoch-markers"/>.
          The VDF output chain provides relative ordering; external
          anchors provide absolute time binding.
        </t>
      </dd>
    </dl>
  </section>

  <section anchor="jitter-privacy">
    <name>Privacy Considerations</name>

    <t>
      Keystroke timing data is sensitive behavioral data: while not
      traditionally classified as biometric data, timing patterns
      can potentially identify individuals or reveal sensitive
      information about cognitive state or physical condition.
    </t>

    <section anchor="jitter-privacy-mitigations">
      <name>Mitigation Measures</name>

      <ul>
        <li>
          <t>Histogram Aggregation:</t>
          <t>
            By default, only aggregated histogram data is included in
            the evidence packet. Raw intervals are optional and SHOULD
            only be disclosed when enhanced verification is required.
          </t>
        </li>

        <li>
          <t>Bucket Granularity:</t>
          <t>
            The RECOMMENDED bucket boundaries (50ms minimum width)
            prevent reconstruction of exact keystroke sequences while
            preserving statistically significant patterns.
          </t>
        </li>

        <li>
          <t>No Character Mapping:</t>
          <t>
            Timing intervals are recorded without association to
            specific characters or words. The evidence captures
            rhythm without content.
          </t>
        </li>

        <li>
          <t>Session Isolation:</t>
          <t>
            Jitter data is bound to a specific evidence packet and
            segment chain. Cross-session correlation requires
            access to multiple evidence packets.
          </t>
        </li>
      </ul>
    </section>

    <section anchor="jitter-privacy-disclosure">
      <name>Disclosure Recommendations</name>

      <t>
        Implementations SHOULD inform users that:
      </t>

      <ol>
        <li>
          Typing rhythm information is captured and included in
          evidence packets
        </li>
        <li>
          Evidence packets may be shared with Verifiers and
          potentially with Relying Parties
        </li>
        <li>
          Raw timing data (if disclosed) could theoretically be
          used for behavioral analysis
        </li>
      </ol>

      <t>
        Users SHOULD have the option to:
      </t>

      <ol>
        <li>
          Disable raw-intervals disclosure (histogram-only mode)
        </li>
        <li>
          Request deletion of evidence packets after verification
        </li>
        <li>
          Review captured entropy statistics before packet
          finalization
        </li>
      </ol>
    </section>
  </section>

  <section anchor="jitter-security">
    <name>Security Considerations</name>

    <section anchor="jitter-replay-attacks">
      <name>Replay Attacks</name>

      <t>
        An adversary might attempt to replay captured jitter data
        from a previous session. This attack is mitigated by:
      </t>

      <ol>
        <li>
          <t>VDF entanglement:</t>
          <t>
            The jitter commitment is bound to the VDF chain, which
            includes the previous checkpoint hash.
          </t>
        </li>
        <li>
          <t>Chain MAC:</t>
          <t>
            The binding-mac includes the previous checkpoint hash,
            preventing transplantation.
          </t>
        </li>
        <li>
          <t>Content binding:</t>
          <t>
            The jitter data is associated with specific content
            hashes that change with each edit.
          </t>
        </li>
        <li>
          <t>Verifier nonce binding:</t>
          <t>
            When verification freshness is required, Verifiers SHOULD
            provide a 32-byte cryptographically random nonce. The Attesting
            Environment incorporates this nonce into the packet signature:
            SIG_k(H3 || verifier_nonce). This proves the evidence was
            generated in response to a specific verification request,
            preventing replay of previously generated evidence packets.
            The verifier_nonce field MUST be present when replay prevention
            is required by the verification policy.
          </t>
        </li>
      </ol>
    </section>

    <section anchor="jitter-simulation-attacks">
      <name>Simulation Attacks</name>

      <t>
        An adversary might attempt to generate synthetic timing data
        that mimics human patterns. The cost of this attack is
        bounded by:
      </t>

      <ol>
        <li>
          <t>Entropy requirement:</t>
          <t>
            Meeting the entropy threshold requires sufficient
            variation in timing. Perfectly regular synthetic input
            will fail the entropy check.
          </t>
        </li>
        <li>
          <t>Real-time constraint:</t>
          <t>
            The VDF entanglement requires that jitter data be
            captured before VDF computation. Generating synthetic
            timing that passes statistical tests while maintaining
            real-time constraints is non-trivial.
          </t>
        </li>
        <li>
          <t>Statistical consistency:</t>
          <t>
            Synthetic timing must be consistent across all
            checkpoints. Anomaly detection may flag statistically
            improbable patterns.
          </t>
        </li>
      </ol>

      <t>
        The Jitter Seal does not claim to make simulation impossible,
        only to make it costly relative to genuine interaction.
        The forgery-cost-section provides quantified bounds on
        attack costs.
      </t>
    </section>

    <section anchor="jitter-ae-trust">
      <name>Attesting Environment Trust</name>

      <t>
        The Jitter Seal relies on the Attesting Environment to
        accurately capture and report timing data. A compromised
        AE could fabricate jitter data. This is addressed by:
      </t>

      <ol>
        <li>
          Hardware binding (hardware-section) for AE integrity
        </li>
        <li>
          Calibration attestation for VDF speed verification
        </li>
        <li>
          Clear documentation of AE trust assumptions in
          absence-claim structures (ae-trust-basis field)
        </li>
      </ol>

      <t>
        Chain-verifiable claims (1-15) do not depend on AE trust
        beyond basic data integrity. Monitoring-dependent claims
        (16-63) explicitly document their AE trust requirements.
      </t>
    </section>
  </section>


    <!-- Section 4: VDF Mechanisms -->
    <section anchor="vdf-mechanisms">
      <name>Verifiable Delay Functions</name>

      <t>
        This section specifies the Verifiable Delay Function (VDF) mechanisms used to establish temporal ordering and a minimum elapsed time between checkpoints. These temporal guarantees distinguish this RATS <xref target="RFC9334"/> profile from attestation frameworks that rely solely on timestamps. The CDDL <xref target="RFC8610"/> schema design provides algorithm agility: the profile supports both iterated hash constructions using SHA-256 <xref target="RFC6234"/> or SHA3-256, which offer O(n) verification through recomputation, and succinct VDF schemes per Pietrzak <xref target="Pietrzak2019"/> and Wesolowski <xref target="Wesolowski2019"/>, which offer O(log n) or O(1) verification through cryptographic proofs. A VDF requires a specified amount of sequential computation time to evaluate regardless of available parallelism, yet its output can be verified efficiently. This property makes VDFs ideal for establishing unforgeable temporal ordering in the CBOR <xref target="RFC8949"/> encoded Evidence packets without reliance on RFC 3161 <xref target="RFC3161"/> timestamps or other trusted third parties.
      </t>

      <section anchor="pq-vdf">
        <name>Post-Quantum Iteration Parameters</name>
        <t>
          To provide security against quantum adversaries using Grover's algorithm,
          the minimum iteration count T MUST be doubled (2*T) compared to classical
          security parameters. Implementers MUST assume a quadratic speedup in
          parallel preimage search.
        </t>
      </section>

      <section anchor="vdf-construction">
        <name>VDF Construction</name>

        <t>
          This specification requires a VDF proof in each checkpoint, with the following fields encoded in CBOR per the CDDL schema. The vdf-proof structure captures the algorithm selection, parameters, input/output pair, cryptographic proof (for succinct VDFs), and a calibration attestation that enables Verifiers to assess whether the claimed-duration is plausible for the iteration count on the attested hardware.
        </t>

        <artwork type="cddl"><![CDATA[
    vdf-proof = {
        1 => vdf-algorithm,            ; algorithm
        2 => vdf-params,               ; params
        3 => bstr,                     ; input
        4 => bstr,                     ; output
        5 => bstr,                     ; proof
        6 => duration,                 ; claimed-duration
        7 => uint,                     ; iterations
        8 => calibration-attestation,  ; calibration (REQUIRED)
    }
    ]]></artwork>

        <section anchor="vdf-algorithms">
          <name>Algorithm Registry</name>

          <t>
            The algorithm registry defines the following VDF algorithms; the algorithm field in the CBOR encoded vdf-proof structure indicates the selection. The iterated hash algorithms (1-2) use SHA-256 or SHA3-256 with implicit proofs (verification by recomputation), while the succinct VDF algorithms (16-19) use the Pietrzak and Wesolowski constructions with explicit cryptographic proofs that enable efficient verification.
          </t>

          <table>
            <thead>
              <tr>
                <th>Value</th>
                <th>Algorithm</th>
                <th>Status</th>
                <th>Proof Size</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>1</td>
                <td>iterated-sha256</td>
                <td>MUST support</td>
                <td>0 (implicit)</td>
              </tr>
              <tr>
                <td>2</td>
                <td>iterated-sha3-256</td>
                <td>SHOULD support</td>
                <td>0 (implicit)</td>
              </tr>
              <tr>
                <td>16</td>
                <td>pietrzak-rsa3072</td>
                <td>MAY support</td>
                <td>~1 KB</td>
              </tr>
              <tr>
                <td>17</td>
                <td>wesolowski-rsa3072</td>
                <td>MAY support</td>
                <td>~256 bytes</td>
              </tr>
              <tr>
                <td>18</td>
                <td>pietrzak-class-group</td>
                <td>MAY support</td>
                <td>~2 KB</td>
              </tr>
              <tr>
                <td>19</td>
                <td>wesolowski-class-group</td>
                <td>MAY support</td>
                <td>~512 bytes</td>
              </tr>
            </tbody>
          </table>

          <t>
            Algorithm values 1-15 are reserved for iterated hash constructions using SHA-256 or similar hash functions, and values 16-31 for succinct VDF schemes based on the Pietrzak and Wesolowski constructions. Values 32 and above remain available for future allocation to accommodate advances in VDF research, with the CDDL schema's extensibility ensuring forward compatibility within the RATS architecture.
          </t>
        </section>

        <section anchor="vdf-iterated-hash">
          <name>Iterated Hash Construction</name>

          <t>
            The iterated hash VDF repeatedly applies SHA-256 or SHA3-256, with the output of each iteration becoming the input to the next. This is the simplest VDF construction: it requires only the hash function from RFC 6234, and it inherently resists parallelization because each iteration depends on the previous output. The mathematical definition follows:
          </t>

          <artwork><![CDATA[
    output = H^n(input)

    where H^n denotes n iterations of hash function H:
      H^0(x) = x
      H^n(x) = H(H^(n-1)(x))
    ]]></artwork>

          <t>
            Parameters for iterated hash VDFs:
          </t>

          <artwork type="cddl"><![CDATA[
    iterated-hash-params = {
        1 => hash-algorithm,    ; hash-function
        2 => uint,              ; iterations-per-second
    }
    ]]></artwork>

          <t>
            The iterations-per-second field in the CBOR encoded params structure records the calibrated performance of the Attesting Environment, enabling Verifiers, per the RATS architecture, to assess whether the claimed-duration is plausible for the iteration count on the attested hardware. When TPM 2.0 <xref target="TPM2.0"/> or Secure Enclave attestation is available, the calibration can be hardware-signed for additional trust. Iterated hash VDFs using SHA-256 exhibit the following properties:
          </t>

          <dl>
            <dt>Verification Cost:</dt>
            <dd>
              O(n) -- Verifier must recompute all iterations.
              This is acceptable for the iteration counts typical in
              authoring scenarios (10^6 to 10^9 iterations).
            </dd>

            <dt>Parallelization Resistance:</dt>
            <dd>
              Inherently sequential. Each iteration depends on the
              previous output. No known parallelization attack.
            </dd>

            <dt>Hardware Acceleration:</dt>
            <dd>
              SHA-256 acceleration (e.g., Intel SHA Extensions, ARM
              Cryptography Extensions) provides ~3-5x speedup over
              software. This is accounted for in calibration.
            </dd>
          </dl>
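
          <t>
            The Verifier's plausibility assessment can be sketched as a
            lower-bound check against the calibrated rate; the tolerance
            parameter (slack for calibration drift and hardware variation)
            is an illustrative assumption, not a normative value.
          </t>

```python
def duration_plausible(iterations, iterations_per_second,
                       claimed_duration_s, tolerance=0.5):
    """The VDF lower-bounds elapsed time: claimed-duration must be at
    least iterations / iterations-per-second, minus tolerated slack."""
    expected_s = iterations / iterations_per_second
    return claimed_duration_s >= expected_s * (1 - tolerance)
```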
        </section>

        <section anchor="vdf-succinct">
          <name>Succinct VDF Construction</name>

          <t>
            Succinct VDFs based on the Pietrzak and Wesolowski constructions provide O(log n) or O(1) verification time, at the cost of larger CBOR encoded proofs (approximately 1-2 KB depending on construction and security parameters) and more complex computation involving modular exponentiation. These constructions rely on repeated squaring in groups of unknown order (RSA groups or class groups); because the group order is not known to the prover, the computation inherently resists parallelization. By contrast, the iterated SHA-256 construction is mathematically simpler but requires O(n) recomputation to verify.
          </t>

          <artwork type="cddl"><![CDATA[
    succinct-vdf-params = {
        10 => uint,             ; modulus-bits (minimum 3072)
        ? 11 => uint,           ; security-parameter
    }
    ]]></artwork>

          <t>
            Key set 10-19 disambiguates succinct params from iterated
            hash params (key set 1-9) without requiring a type tag.
          </t>

          <t>
            Succinct VDFs per Pietrzak and Wesolowski are OPTIONAL within this RATS profile. They are intended for scenarios where verification must complete in bounded time regardless of delay duration (enabling real-time verification in time-constrained environments), where CBOR encoded Evidence packets may contain very long VDF chains (millions of checkpoints accumulated over extended authoring periods), or where third-party Verifiers with limited computational resources cannot afford O(n) SHA-256 recomputation. When succinct VDFs are used, the proof field in the CDDL schema contains the cryptographic proof of correct computation (approximately 256 bytes for Wesolowski or 1 KB for Pietrzak); for iterated hash VDFs using SHA-256, the proof field is empty and verification proceeds by recomputing the hash chain.
          </t>
        </section>
      </section>

      <section anchor="vdf-causality">
        <name>Causality Property</name>

        <t>
          The VDF chain establishes unforgeable temporal ordering through structural causality: each checkpoint's VDF output depends on the previous checkpoint's output, creating a sequence that can only be computed in order regardless of available computational resources. This property is a key novel contribution of the Proof of Process framework within the RATS architecture, distinguishing this approach from timestamp-based ordering via RFC 3161, which relies on trusted third parties. While external anchors via RFC 3161 may bind the VDF chain to absolute time, the relative ordering is established cryptographically through SHA-256 hash entanglement without any external trust assumptions.
        </t>

        <section anchor="vdf-entanglement">
          <name>Checkpoint Entanglement</name>

          <t>
            The VDF input for segment N is computed by combining the previous VDF output, the current content hash, the jitter commitment (bound via HMAC), and the sequence number into a single SHA-256 input, as shown in the formula below. This entanglement is encoded in CBOR per the CDDL schema and creates the causal dependencies that establish temporal ordering.
          </t>

          <artwork><![CDATA[
    VDF_input{N} = H(
        VDF_output{N-1} ||      ; Previous VDF output
        content-hash{N} ||      ; Current document state
        jitter-commitment{N} || ; Captured behavioral entropy
        sequence{N}             ; Checkpoint sequence number
    )
    ]]></artwork>

          <t>
            For the genesis checkpoint (N = 0):
          </t>

          <artwork><![CDATA[
    VDF_input{0} = H(
        session-entropy ||      ; Random 256-bit session seed
        content-hash{0} ||      ; Initial document state
        jitter-commitment{0} ||
        0x00000000              ; Sequence zero
    )
    ]]></artwork>
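          <t>
            The entanglement formulas above can be sketched in Python. This
            is an illustrative, non-normative sketch: the byte-level field
            encodings shown are assumptions for clarity, and the normative
            encoding is the CBOR/CDDL schema.
          </t>

          <artwork type="python"><![CDATA[
import hashlib
import struct

def vdf_input(prev_vdf_output, content_hash, jitter_commitment, sequence):
    # Entangle the prior VDF output, document state, behavioral-entropy
    # commitment, and sequence number into one SHA-256 input.
    h = hashlib.sha256()
    h.update(prev_vdf_output)              # VDF_output{N-1}
    h.update(content_hash)                 # content-hash{N}
    h.update(jitter_commitment)            # jitter-commitment{N}
    h.update(struct.pack(">I", sequence))  # sequence{N}, 32-bit big-endian
    return h.digest()

def genesis_input(session_entropy, content_hash, jitter_commitment):
    # Genesis checkpoint (N = 0): the 256-bit session entropy stands in
    # for the nonexistent previous VDF output; sequence is zero.
    return vdf_input(session_entropy, content_hash, jitter_commitment, 0)
    ]]></artwork>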

          <t>
            This construction ensures the following properties within the
            RATS architecture:
          </t>

          <dl>
            <dt>Sequential Dependency:</dt>
            <dd>
              VDF_output{N} cannot be computed without VDF_output{N-1},
              making the chain inherently sequential regardless of
              available parallelism.
            </dd>

            <dt>Content Binding:</dt>
            <dd>
              Each VDF output is bound to a specific document state
              computed using SHA-256; changing the content invalidates
              all subsequent VDF proofs in the CBOR-encoded chain.
            </dd>

            <dt>Jitter Binding:</dt>
            <dd>
              The behavioral entropy commitment, computed via SHA-256 and
              bound via HMAC, is entangled with the VDF, as detailed in
              <xref target="jitter-vdf-entanglement"/>.
            </dd>

            <dt>Precomputation Resistance:</dt>
            <dd>
              The SHA-256 input depends on runtime values (content hash,
              jitter commitment) that are unknown until the checkpoint
              is created.
            </dd>
          </dl>
        </section>

        <section anchor="vdf-temporal-ordering">
          <name>Temporal Ordering Without Trusted Time</name>

          <t>
            The VDF causality property affords relative temporal ordering without reliance on trusted timestamps such as RFC 3161 <xref target="RFC3161"/>, distinguishing this RATS profile from attestation schemes that require trusted time sources:
          </t>

          <dl>
            <dt>Relative Ordering:</dt>
            <dd>
              Checkpoint N necessarily occurred after checkpoint N-1,
              because VDF_input{N} requires VDF_output{N-1}.
            </dd>

            <dt>Minimum Elapsed Time:</dt>
            <dd>
              <t>
                The time between checkpoints N-1 and N is at least:
              </t>
              <artwork><![CDATA[
    min_elapsed{N} = iterations{N} / calibration_rate
    ]]></artwork>
              <t>
                where calibration_rate is the attested iterations-per-second
                for the device.
              </t>
            </dd>

            <dt>Cumulative Time Bound:</dt>
            <dd>
              <t>
                The total minimum time to produce the evidence packet is:
              </t>
              <artwork><![CDATA[
    min_total = sum(iterations[i] / calibration_rate) for i = 0..N
    ]]></artwork>
            </dd>

            <dt>Absolute Time Binding:</dt>
            <dd>
              External anchors including RFC 3161 timestamps and blockchain proofs bind the SHA-256 segment chain to absolute time. The VDF provides the relative ordering through causality; the RFC 3161 anchors provide the epoch binding to wall-clock time.
            </dd>
          </dl>
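          <t>
            The per-checkpoint and cumulative bounds above can be sketched
            as follows (illustrative, non-normative; iteration counts and
            calibration_rate are as defined in the formulas above):
          </t>

          <artwork type="python"><![CDATA[
def min_elapsed_seconds(iterations, calibration_rate):
    # Lower bound on wall-clock time for one checkpoint's VDF:
    # min_elapsed{N} = iterations{N} / calibration_rate
    return iterations / calibration_rate

def min_total_seconds(iteration_counts, calibration_rate):
    # Cumulative lower bound over the whole evidence packet:
    # min_total = sum(iterations[i] / calibration_rate) for i = 0..N
    return sum(n / calibration_rate for n in iteration_counts)
    ]]></artwork>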
        </section>

        <section anchor="vdf-backdating">
          <name>Backdating Resistance</name>

          <t>
            An adversary attempting to backdate evidence within this RATS
            profile must accomplish all of the following:
          </t>

          <ol>
            <li>
              Generate content that produces the desired content-hash
              computed using SHA-256 (prevented by the preimage resistance
              of SHA-256).
            </li>
            <li>
              Generate jitter data that produces a valid entropy-commitment
              via SHA-256 (requires access to the original timing stream or
              statistical simulation).
            </li>
            <li>
              Compute the VDF chain from the backdated checkpoint forward
              (requires sequential time proportional to the iteration count).
            </li>
            <li>
              Complete all of the above before any external anchor, such as
              an RFC 3161 timestamp or blockchain proof, confirms a later
              checkpoint (a race condition the adversary loses if anchors
              are obtained promptly).
            </li>
          </ol>

          <t>
            The cost of VDF recomputation grows linearly with the number of
            subsequent checkpoints and with the iteration count per
            checkpoint; the forgery-cost-section quantifies this cost using
            concrete metrics. Crucially, the adversary cannot parallelize
            VDF computation: both iterated SHA-256 constructions and the
            group-theoretic constructions of Pietrzak and Wesolowski are
            inherently sequential. Even with unlimited computational
            resources, each VDF must complete before the next can begin,
            creating an irreducible time cost for any backdating attack
            that exceeds the original authoring time.
          </t>
        </section>

        <section anchor="time-binding-degradation">
          <name>Time Evidence and Degradation</name>

          <t>
            Verifiable Delay Functions provide relative temporal ordering but cannot
            independently establish absolute time. When external anchors are unavailable,
            the strength of temporal evidence degrades. This section defines explicit
            tiers that document the achievable temporal binding based on available
            anchor sources, enabling Verifiers to make informed trust decisions.
          </t>

          <section anchor="time-binding-tiers">
            <name>Time Binding Tier Definitions</name>

            <t>
              Four tiers of temporal binding are defined, based on the combination
              of external anchors available at evidence generation time:
            </t>

            <table>
              <name>Time Binding Tier Requirements</name>
              <thead>
                <tr>
                  <th>Tier</th>
                  <th>Value</th>
                  <th>Anchor Requirements</th>
                  <th>Time Binding Strength</th>
                </tr>
              </thead>
              <tbody>
                <tr>
                  <td>MAXIMUM</td>
                  <td>1</td>
                  <td>>=2 blockchain + >=2 TSA</td>
                  <td>Strong absolute time</td>
                </tr>
                <tr>
                  <td>ENHANCED</td>
                  <td>2</td>
                  <td>>=1 blockchain OR >=2 TSA</td>
                  <td>Probable absolute time</td>
                </tr>
                <tr>
                  <td>STANDARD</td>
                  <td>3</td>
                  <td>>=1 TSA</td>
                  <td>Weak absolute time</td>
                </tr>
                <tr>
                  <td>DEGRADED</td>
                  <td>4</td>
                  <td>VDF + local clock only</td>
                  <td>Relative time only</td>
                </tr>
              </tbody>
            </table>

            <t>
              The tier classification follows the principle that redundancy across
              independent anchor types provides stronger temporal assurance than
              reliance on any single source or type.
            </t>
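            <t>
              The tier classification in the table above can be sketched as
              follows (illustrative, non-normative; counting of confirmed
              anchors per type is assumed to happen elsewhere):
            </t>

            <artwork type="python"><![CDATA[
def time_binding_tier(blockchain_anchors, tsa_anchors):
    # Classify per the tier table: redundancy across independent
    # anchor types yields stronger temporal assurance.
    if blockchain_anchors >= 2 and tsa_anchors >= 2:
        return 1  # MAXIMUM: strong absolute time
    if blockchain_anchors >= 1 or tsa_anchors >= 2:
        return 2  # ENHANCED: probable absolute time
    if tsa_anchors >= 1:
        return 3  # STANDARD: weak absolute time
    return 4      # DEGRADED: VDF + local clock only
    ]]></artwork>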
          </section>

          <section anchor="time-tier-capabilities">
            <name>Tier Capabilities and Limitations</name>

            <dl>
              <dt>MAXIMUM Tier:</dt>
              <dd>
                <t>
                  Evidence at this tier can prove: the document existed before
                  a specific absolute time (via blockchain confirmation); the document
                  was timestamped by multiple independent authorities (via TSA tokens);
                  the relative ordering of checkpoints (via VDF); and the minimum
                  elapsed time between states (via VDF calibration).
                </t>
                <t>
                  Suitable for: litigation support, regulatory compliance, forensic
                  investigation, and contexts requiring independently verifiable
                  absolute time claims.
                </t>
              </dd>

              <dt>ENHANCED Tier:</dt>
              <dd>
                <t>
                  Evidence at this tier can prove: probable absolute time binding
                  through either blockchain proof or redundant TSA timestamps;
                  relative ordering of checkpoints; and minimum elapsed time.
                  However, the absence of cross-type redundancy introduces single-source
                  risk if the chosen anchor type is later compromised or disputed.
                </t>
                <t>
                  Suitable for: professional documentation, academic submissions, and
                  contexts where absolute time is important but cross-verification
                  requirements are moderate.
                </t>
              </dd>

              <dt>STANDARD Tier:</dt>
              <dd>
                <t>
                  Evidence at this tier can prove: absolute time binding dependent
                  on a single Time Stamping Authority; relative ordering; and minimum
                  elapsed time. The temporal claim is only as trustworthy as the
                  specific TSA, with no independent corroboration.
                </t>
                <t>
                  Suitable for: internal records, personal documentation, and contexts
                  where the TSA is trusted by all relevant parties.
                </t>
                <t>
                  LIMITATIONS: If the TSA is compromised, unavailable for verification,
                  or disputed, no independent time evidence exists. Verifiers SHOULD
                  document TSA identity in their assessment.
                </t>
              </dd>

              <dt>DEGRADED Tier:</dt>
              <dd>
                <t>
                  Evidence at this tier can ONLY prove: relative ordering of checkpoints
                  (checkpoint N necessarily occurred after checkpoint N-1); and minimum
                  elapsed time between checkpoints (via VDF and calibration). The local
                  clock timestamp is recorded but is untrusted for verification purposes.
                </t>
                <t>
                  CANNOT PROVE: absolute time of evidence creation; that the evidence
                  was not pre-computed and held before claimed timestamps; or epoch
                  binding to any external reference.
                </t>
                <t>
                  Suitable for: offline scenarios, air-gapped environments, and contexts
                  where relative ordering is sufficient. NOT suitable for contexts
                  requiring absolute time claims or adversarial verification of
                  creation time.
                </t>
              </dd>
            </dl>
          </section>

          <section anchor="time-degraded-explicit">
            <name>Explicit DEGRADED Tier Limitations</name>

            <t>
              When evidence is generated at DEGRADED tier, Attesters MUST understand
              and Verifiers MUST document the following limitations:
            </t>

            <ul>
              <li>
                The local timestamp in the evidence packet reflects the Attesting
                Environment's clock at generation time, which may be manipulated
                or misconfigured. It is NOT cryptographically bound to any external
                reference.
              </li>
              <li>
                An adversary with sufficient computational resources could generate
                evidence with a past local timestamp by computing the VDF chain forward
                from any starting point. The cost of this attack is bounded by the
                forgery-cost-bounds analysis but is not prevented by temporal binding.
              </li>
              <li>
                DEGRADED evidence provides process documentation suitable for honest
                parties but offers limited protection against adversarial backdating
                beyond the VDF computational cost.
              </li>
              <li>
                The time-evidence structure MUST contain a null value for
                absolute-time-bounds when the tier is DEGRADED, explicitly indicating
                the absence of absolute time claims.
              </li>
            </ul>

            <t>
              Attestation results for DEGRADED tier evidence SHOULD include a caveat
              explicitly stating that no absolute time claims can be verified.
            </t>
          </section>

          <section anchor="time-reanchoring">
            <name>Re-anchoring for Progressive Strengthening</name>

            <t>
              Evidence generated at a lower tier MAY be progressively strengthened
              by obtaining additional anchors after initial generation:
            </t>

            <dl>
              <dt>Post-generation Anchoring:</dt>
              <dd>
                If an evidence packet was generated at DEGRADED or STANDARD tier,
                the Attester MAY subsequently obtain additional anchors (blockchain
                proofs, TSA timestamps) that bind the packet-hash to later absolute
                times. This does NOT retroactively prove when the evidence was
                originally created, but does prove that the evidence existed before
                the anchor confirmation time.
              </dd>

              <dt>Anchor Status Tracking:</dt>
              <dd>
                The anchor-status structure documents both successful and failed
                anchor attempts. Verifiers can assess whether anchor unavailability
                was due to network conditions, service outages, or deliberate omission.
                A pattern of consistently failed anchors across multiple services may
                indicate intentional avoidance rather than legitimate unavailability.
              </dd>

              <dt>Upgrade Path:</dt>
              <dd>
                <t>
                  DEGRADED to STANDARD: Obtain one RFC 3161
                  timestamp on the packet-hash.
                </t>
                <t>
                  STANDARD to ENHANCED: Obtain either one blockchain anchor OR a
                  second TSA timestamp from an independent authority.
                </t>
                <t>
                  ENHANCED to MAXIMUM: Obtain both blockchain diversity (>=2 chains)
                  AND TSA diversity (>=2 authorities).
                </t>
              </dd>

              <dt>Time Bound Updates:</dt>
              <dd>
                When additional anchors are obtained, the absolute-time-bounds SHOULD
                be updated to reflect the tighter constraints. The earliest-possible
                timestamp is the creation time of the earliest confirmed anchor; the
                latest-possible is the creation time of the latest confirmed anchor
                before any modifications. Re-anchoring narrows the uncertainty window.
              </dd>
            </dl>
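            <t>
              The narrowing of absolute-time-bounds on re-anchoring can be
              sketched as follows (illustrative, non-normative; anchor
              creation times are assumed to be seconds since the epoch, and
              the dictionary keys mirror the CDDL field comments):
            </t>

            <artwork type="python"><![CDATA[
def absolute_time_bounds(confirmed_anchor_times):
    # Derive earliest/latest possible bounds and the uncertainty
    # window from confirmed anchor creation times.
    if not confirmed_anchor_times:
        return None  # DEGRADED: no absolute time claims
    earliest = min(confirmed_anchor_times)
    latest = max(confirmed_anchor_times)
    return {
        "earliest-possible": earliest,
        "latest-possible": latest,
        "uncertainty-seconds": latest - earliest,
        "anchor-count": len(confirmed_anchor_times),
    }
    ]]></artwork>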
          </section>

          <section anchor="time-admissibility">
            <name>Admissibility Guidance by Tier</name>

            <t>
              Relying Parties SHOULD consider the following guidance when assessing
              temporal claims based on time binding tier:
            </t>

            <table>
              <name>Temporal Admissibility by Tier</name>
              <thead>
                <tr>
                  <th>Tier</th>
                  <th>Absolute Time Claims</th>
                  <th>Relative Time Claims</th>
                  <th>Recommended Contexts</th>
                </tr>
              </thead>
              <tbody>
                <tr>
                  <td>MAXIMUM</td>
                  <td>Strong - independently verifiable</td>
                  <td>Strong - VDF + calibration</td>
                  <td>Legal, regulatory, forensic</td>
                </tr>
                <tr>
                  <td>ENHANCED</td>
                  <td>Moderate - single-type dependency</td>
                  <td>Strong - VDF + calibration</td>
                  <td>Professional, academic</td>
                </tr>
                <tr>
                  <td>STANDARD</td>
                  <td>Weak - single-authority dependency</td>
                  <td>Strong - VDF + calibration</td>
                  <td>Internal, trusted-party</td>
                </tr>
                <tr>
                  <td>DEGRADED</td>
                  <td>None - cannot verify</td>
                  <td>Moderate - VDF + calibration</td>
                  <td>Offline, process documentation</td>
                </tr>
              </tbody>
            </table>

            <t>
              Verifiers MUST NOT make absolute time claims for DEGRADED tier evidence.
              Attestation results for DEGRADED evidence SHOULD explicitly state that
              temporal claims are limited to relative ordering and minimum elapsed time.
            </t>

            <t>
              Policy engines MAY require minimum tier thresholds for specific use cases.
              For example, a litigation support policy might require ENHANCED or MAXIMUM
              tier for temporal claims to be considered in evidence assessment.
            </t>
          </section>

          <section anchor="time-evidence-structure">
            <name>Time Evidence Structure</name>

            <t>
              The time-evidence structure captures the complete temporal binding
              assessment for an evidence packet:
            </t>

            <artwork type="cddl"><![CDATA[
    time-binding-tier = &(
        maximum: 1,     ; >=2 blockchain + >=2 TSA anchors
        enhanced: 2,    ; >=1 blockchain OR >=2 TSA anchors
        standard: 3,    ; >=1 TSA anchor
        degraded: 4,    ; VDF + local clock only
    )

    time-evidence = {
        1 => time-binding-tier,         ; tier
        2 => absolute-time-bounds / null, ; bounds (null if degraded)
        3 => relative-time-proof,       ; vdf-duration
        4 => [* anchor-status],         ; anchor-statuses
        5 => [* tstr],                  ; recommendations
    }

    absolute-time-bounds = {
        1 => pop-timestamp,             ; earliest-possible
        2 => pop-timestamp,             ; latest-possible
        3 => uint,                      ; uncertainty-seconds
        4 => uint,                      ; anchor-count
    }

    relative-time-proof = {
        1 => uint,                      ; total-vdf-iterations
        2 => uint,                      ; min-elapsed-seconds
        3 => uint,                      ; max-elapsed-seconds
        4 => uint,                      ; checkpoint-count
    }

    anchor-status = {
        1 => anchor-type,               ; type
        2 => anchor-state,              ; status
        ? 3 => tstr,                    ; reason (if unavailable/failed)
        ? 4 => pop-timestamp,           ; last-attempt
        ? 5 => tstr,                    ; anchor-id
    }

    anchor-type = &(
        bitcoin: 1,
        ethereum: 2,
        rfc3161: 3,
        drand: 4,
        opentimestamps: 5,
    )

    anchor-state = &(
        confirmed: 1,
        pending: 2,
        unavailable: 3,
        failed: 4,
        expired: 5,
    )
    ]]></artwork>

            <t>
              The recommendations field (key 5) SHOULD contain actionable guidance
              for strengthening the temporal binding. Examples include:
              "Obtain blockchain anchor within 24 hours to upgrade to ENHANCED tier",
              "Re-attempt TSA anchoring when network connectivity is restored", or
              "Current tier is MAXIMUM; no further strengthening available".
            </t>
          </section>
        </section>
      </section>

      <section anchor="vdf-calibration">
        <name>Calibration Attestation</name>

        <t>
          Calibration attestation addresses a critical verification problem within the RATS architecture: how does a Verifier know whether the claimed VDF iterations could have been computed in the claimed duration on the Attester's hardware? Without calibration, an adversary could claim a slow device while actually using fast hardware, thereby appearing to have spent more time than actually elapsed. The calibration-attestation structure encoded in CBOR per the CDDL schema addresses this by recording a hardware-attested measurement of VDF performance, optionally signed using TPM 2.0 or Secure Enclave keys via COSE.
        </t>

        <section anchor="vdf-calibration-structure">
          <name>Attestation Structure</name>

          <artwork type="cddl"><![CDATA[
    calibration-attestation = {
        1 => uint,              ; calibration-iterations
        2 => pop-timestamp,     ; calibration-time
        3 => cose-signature,    ; hw-signature
        4 => bstr,              ; device-nonce
        ? 5 => tstr,            ; device-model
        ? 6 => tstr,            ; hardware-class
    }
    ]]></artwork>

          <dl>
            <dt>calibration-iterations (key 1):</dt>
            <dd>
              The number of VDF iterations completed in a 1-second
              calibration burst at session start.
            </dd>

            <dt>calibration-time (key 2):</dt>
            <dd>
              Timestamp when calibration was performed. SHOULD be
              within 24 hours of the first checkpoint.
            </dd>

            <dt>hw-signature (key 3):</dt>
            <dd>
              COSE_Sign1 signature over the calibration data, produced
              by hardware-bound keys (Secure Enclave, TPM, etc.).
            </dd>

            <dt>device-nonce (key 4):</dt>
            <dd>
              Random 256-bit value generated at calibration time.
              Prevents replay of calibration attestations across sessions.
            </dd>

            <dt>device-model (key 5, optional):</dt>
            <dd>
              Human-readable device identifier for reference purposes.
              Not used in verification.
            </dd>

            <dt>hardware-class (key 6, optional):</dt>
            <dd>
              An identifier for the hardware security module or processor
              generation (e.g., "tpm-2.0-infineon-v1", "apple-se-m3").
              Enables Verifiers to perform plausibility checks against
              a whitelist of expected hash rates for the attested hardware.
            </dd>
          </dl>
        </section>

        <section anchor="vdf-calibration-procedure">
          <name>Calibration Procedure</name>

          <t>
            The Attesting Environment performs calibration as follows:
          </t>

          <ol>
            <li>
              <t>Generate Nonce:</t>
              <t>
                Generate a cryptographically random 256-bit device-nonce.
              </t>
            </li>

            <li>
              <t>Initialize Timer:</t>
              <t>
                Record high-resolution start time T_start.
              </t>
            </li>

            <li>
              <t>Execute Calibration Burst:</t>
              <t>
                Compute VDF iterations using the session's VDF algorithm,
                starting from H(device-nonce), until 1 second has elapsed.
              </t>
            </li>

            <li>
              <t>Record Result:</t>
              <t>
                calibration-iterations = number of iterations completed.
              </t>
            </li>

            <li>
              <t>Generate Attestation:</t>
              <t>
                Construct the attestation payload and sign with
                hardware-bound key.
              </t>
            </li>
          </ol>

          <t>
            The attestation payload for signing:
          </t>

          <artwork><![CDATA[
    attestation-payload = CBOR({
        "alg": vdf-algorithm,
        "iter": calibration-iterations,
        "nonce": device-nonce,
        "time": calibration-time
    })
    ]]></artwork>
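          <t>
            The calibration burst in steps 1-4 can be sketched as follows
            for the iterated SHA-256 construction (illustrative,
            non-normative; a real Attester would additionally CBOR-encode
            the payload and sign it with a hardware-bound key per step 5):
          </t>

          <artwork type="python"><![CDATA[
import hashlib
import os
import time

def calibrate(burst_seconds=1.0):
    # Steps 1-4: generate a nonce, then iterate SHA-256 starting from
    # H(device-nonce) until the burst window elapses, counting iterations.
    device_nonce = os.urandom(32)
    state = hashlib.sha256(device_nonce).digest()
    iterations = 0
    deadline = time.monotonic() + burst_seconds
    while time.monotonic() < deadline:
        state = hashlib.sha256(state).digest()
        iterations += 1
    return device_nonce, iterations  # calibration-iterations
    ]]></artwork>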
        </section>

        <section anchor="vdf-calibration-verification">
          <name>Calibration Verification</name>

          <t>
            A Verifier validates calibration attestation as follows:
          </t>

          <ol>
            <li>
              <t>Signature Verification:</t>
              <t>
                Verify the COSE_Sign1 signature using the device's
                public key (from hardware-section or certificate chain).
              </t>
            </li>

            <li>
              <t>Nonce Uniqueness:</t>
              <t>
                Verify the device-nonce has not been seen in other
                sessions (optional, requires Verifier state).
              </t>
            </li>

            <li>
              <t>Plausibility Check:</t>
              <t>
                Verify calibration-iterations falls within expected
                range for the device class:
              </t>
              <ul>
                <li>Mobile devices: 10^5 - 10^7 iterations/second</li>
                <li>Desktop/laptop: 10^6 - 10^8 iterations/second</li>
                <li>Server-class: 10^7 - 10^9 iterations/second</li>
              </ul>
            </li>

            <li>
              <t>Consistency Check:</t>
              <t>
                For each checkpoint, verify:
              </t>
              <artwork><![CDATA[
    claimed-duration >= iterations / (calibration-iterations * tolerance)
    ]]></artwork>
              <t>
                where tolerance accounts for measurement variance
                (RECOMMENDED: 1.1, i.e., 10% margin).
              </t>
            </li>
          </ol>
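          <t>
            The plausibility and consistency checks can be sketched as
            follows (illustrative, non-normative; the 1.1 tolerance is the
            RECOMMENDED value above, and the range bounds come from the
            device-class list):
          </t>

          <artwork type="python"><![CDATA[
def plausible(calibration_iterations, lo, hi):
    # Step 3: attested rate must fall within the expected
    # range for the device class.
    return lo <= calibration_iterations <= hi

def consistent(claimed_duration, iterations, calibration_iterations,
               tolerance=1.1):
    # Step 4: the claimed duration must be achievable at the attested
    # rate, with a RECOMMENDED 10% measurement-variance margin.
    return claimed_duration >= iterations / (calibration_iterations * tolerance)
    ]]></artwork>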
        </section>

        <section anchor="vdf-calibration-trust">
          <name>Trust Model</name>

          <t>
            Calibration attestation relies on hardware-bound key integrity:
          </t>

          <ul>
            <li>
              <t>With hardware attestation:</t>
              <t>
                The calibration rate is trustworthy to the extent that
                the hardware security module is trustworthy. An adversary
                cannot claim faster-than-actual calibration without
                compromising the HSM.
              </t>
            </li>

            <li>
              <t>Without hardware attestation:</t>
              <t>
                The calibration rate is self-reported by the Attesting
                Environment. The Verifier should apply conservative
                assumptions and may require external anchors for
                time verification.
              </t>
            </li>
          </ul>

          <t>
            The hardware-section documents whether hardware attestation
            is available and which platform is used.
          </t>
        </section>
      </section>

      <section anchor="vdf-verification">
        <name>Verification Procedure</name>

        <t>
          A Verifier appraises VDF proofs through the following procedure:
        </t>

        <section anchor="vdf-verification-iterated">
          <name>Iterated Hash Verification</name>

          <t>
            For iterated hash VDFs, verification requires recomputation:
          </t>

          <ol>
            <li>
              <t>Reconstruct Input:</t>
              <t>
                Compute VDF_input{N} from the segment data using
                the entanglement formula in
                <xref target="vdf-entanglement"/>.
              </t>
            </li>

            <li>
              <t>Recompute VDF:</t>
              <t>
                Execute iterations{N} hash iterations starting from
                VDF_input{N}.
              </t>
            </li>

            <li>
              <t>Compare Output:</t>
              <t>
                Verify the computed output matches the claimed
                VDF_output{N}.
              </t>
            </li>

            <li>
              <t>Verify Duration (if calibration present):</t>
              <t>
                Apply the consistency check from
                <xref target="vdf-calibration-verification"/>.
              </t>
            </li>
          </ol>
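          <t>
            Steps 2 and 3 for a single checkpoint can be sketched as
            follows (illustrative, non-normative; assumes the iterated
            SHA-256 construction, with VDF_input{N} already reconstructed
            per step 1):
          </t>

          <artwork type="python"><![CDATA[
import hashlib

def sequential_hash(vdf_input, iterations):
    # Step 2: recompute the iterated SHA-256 chain
    # from the reconstructed input.
    state = vdf_input
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

def verify_iterated(vdf_input, iterations, claimed_output):
    # Step 3: compare the recomputed output against the claim.
    return sequential_hash(vdf_input, iterations) == claimed_output
    ]]></artwork>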

          <t>
            For large evidence packets, Verifiers MAY use sampling
            strategies:
          </t>

          <ul>
            <li>
              Verify first and last checkpoints fully
            </li>
            <li>
              Randomly sample intermediate checkpoints
            </li>
            <li>
              Verify chain linkage (prev-hash) for all checkpoints
            </li>
          </ul>
        </section>

        <section anchor="vdf-verification-succinct">
          <name>Succinct VDF Verification</name>

          <t>
            For succinct VDFs, verification uses the cryptographic proof:
          </t>

          <ol>
            <li>
              <t>Reconstruct Input:</t>
              <t>
                Compute VDF_input{N} as above.
              </t>
            </li>

            <li>
              <t>Parse Proof:</t>
              <t>
                Decode the proof field according to the algorithm
                specification.
              </t>
            </li>

            <li>
              <t>Verify Proof:</t>
              <t>
                Execute the algorithm-specific verification procedure
                (Pietrzak or Wesolowski).
              </t>
            </li>

            <li>
              <t>Verify Duration:</t>
              <t>
                Apply calibration consistency check.
              </t>
            </li>
          </ol>
        </section>
      </section>

      <section anchor="vdf-algorithm-agility">
        <name>Algorithm Agility</name>

        <section anchor="vdf-migration">
          <name>Migration Path</name>

          <t>
            Evidence packets MAY contain checkpoints using different
            VDF algorithms. This enables migration scenarios:
          </t>

          <ul>
            <li>
              Upgrading from iterated-sha256 to iterated-sha3-256
            </li>
            <li>
              Transitioning from iterated hash to succinct VDF
            </li>
            <li>
              Adopting post-quantum secure constructions
            </li>
          </ul>

          <t>
            Algorithm changes SHOULD occur at session boundaries.
            Within a session, algorithm consistency is RECOMMENDED
            for simplicity.
          </t>
        </section>

        <section anchor="vdf-post-quantum">
          <name>Post-Quantum Considerations</name>

          <t>
            Current VDF constructions have varying post-quantum security:
          </t>

          <dl>
            <dt>Iterated Hash (SHA-256, SHA3-256):</dt>
            <dd>
              Grover's algorithm provides a quadratic speedup for preimage
              search, weakening preimage resistance but not the sequential
              computation property. The VDF remains secure with doubled
              iteration counts.
            </dd>

            <dt>RSA-based (Pietrzak, Wesolowski):</dt>
            <dd>
              Vulnerable to Shor's algorithm. Not recommended for
              long-term evidence that must remain verifiable in a
              post-quantum era.
            </dd>

            <dt>Class-group based:</dt>
            <dd>
              Based on class group computations in imaginary quadratic
              fields. Quantum security is less well understood but
              believed to be stronger than RSA.
            </dd>
          </dl>

          <t>
            For evidence intended to remain valid for decades,
            iterated hash VDFs are RECOMMENDED.
          </t>
        </section>
      </section>

      <section anchor="vdf-security">
        <name>Security Considerations</name>

        <section anchor="vdf-acceleration">
          <name>Hardware Acceleration Attacks</name>

          <t>
            An adversary with specialized hardware (ASICs, FPGAs) may
            compute VDF iterations faster than the calibrated rate.
            Mitigations:
          </t>

          <ul>
            <li>
              <t>Calibration Reflects Actual Hardware:</t>
              <t>
                Calibration is performed on the actual device, so the
                calibration rate already accounts for any acceleration
                available to the Attester.
              </t>
            </li>

            <li>
              <t>Asymmetric Advantage Limited:</t>
              <t>
                SHA-256 is widely optimized. The speedup from custom
                hardware over commodity CPUs with SHA extensions is
                typically less than 10x.
              </t>
            </li>

            <li>
              <t>Economic Analysis:</t>
              <t>
                The forgery-cost-section quantifies the cost of
                acceleration attacks in terms of hardware investment
                and time.
              </t>
            </li>
          </ul>
        </section>

        <section anchor="vdf-parallelization">
          <name>Parallelization Resistance</name>

          <t>
            VDFs are designed to resist parallelization:
          </t>

          <dl>
            <dt>Iterated Hash:</dt>
            <dd>
              Each iteration depends on the previous output. No
              parallelization is possible without breaking the hash
              function's preimage resistance.
            </dd>

            <dt>Succinct VDFs:</dt>
            <dd>
              Based on repeated squaring in groups with unknown order.
              Parallelization would require factoring the modulus
              (RSA-based) or solving the class group order problem
              (class-group based).
            </dd>
          </dl>

          <t>
            The key insight: an adversary with P processors cannot
            compute the VDF P times faster. The best known attacks
            provide negligible parallelization advantage.
          </t>
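          <t>
            As a concrete illustration of this sequential dependence, an
            iterated-hash VDF can be sketched in a few lines of Python.
            This is illustrative only; domain separation, checkpoint
            inputs, and iteration counts are simplified relative to this
            specification:
          </t>
          <sourcecode type="python"><![CDATA[
import hashlib

def vdf_compute(seed: bytes, iterations: int) -> bytes:
    # Each step consumes the previous digest, so the chain cannot
    # be distributed across processors without inverting SHA-256.
    state = seed
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

def vdf_verify(seed: bytes, iterations: int, claimed: bytes) -> bool:
    # Iterated-hash VDFs have no succinct proof; verification is
    # full recomputation of the chain.
    return vdf_compute(seed, iterations) == claimed
]]></sourcecode>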
        </section>

        <section anchor="vdf-time-memory">
          <name>Time-Memory Tradeoffs</name>

          <t>
            For iterated hash VDFs, an adversary might attempt to
            precompute and store intermediate values:
          </t>

          <ul>
            <li>
              <t>Rainbow Tables:</t>
              <t>
                Precomputing H^n(x) for many x values. Mitigated by
                the unpredictable VDF input (includes content hash
                and jitter commitment).
              </t>
            </li>

            <li>
              <t>Checkpoint Tables:</t>
              <t>
                Storing every k-th intermediate value during legitimate
                computation. Enables faster recomputation from nearby
                checkpoints but does not help with backdating attacks
                (which require computing from a specific starting point).
              </t>
            </li>
          </ul>

          <t>
            No practical time-memory tradeoff significantly reduces
            the sequential computation requirement.
          </t>
        </section>

        <section anchor="vdf-calibration-attacks">
          <name>Calibration Attacks</name>

          <t>
            Attacks on the calibration system:
          </t>

          <dl>
            <dt>Throttled Calibration:</dt>
            <dd>
              <t>
                Adversary intentionally slows device during calibration
                to report lower iterations-per-second, then computes VDFs
                faster than claimed.
              </t>
              <t>
                Mitigation: Plausibility checks based on device class.
                Anomalously slow calibration for a known device model
                triggers Verifier skepticism.
              </t>
            </dd>

            <dt>Calibration Replay:</dt>
            <dd>
              <t>
                Adversary reuses calibration attestation from a slower
                device.
              </t>
              <t>
                Mitigation: Device-nonce binds calibration to session.
                Hardware signature binds to specific device key.
              </t>
            </dd>

            <dt>Device Key Compromise:</dt>
            <dd>
              <t>
                Adversary extracts hardware-bound signing key.
              </t>
              <t>
                Mitigation: Hardware security modules are designed to
                resist key extraction. This attack requires physical
                access and significant resources.
              </t>
            </dd>
          </dl>
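          <t>
            The plausibility check against throttled calibration can be
            sketched as a simple ratio test. The 0.5 tolerance and the
            device-class baseline are illustrative values, not normative
            parameters:
          </t>
          <sourcecode type="python"><![CDATA[
def calibration_plausible(reported_ips: float,
                          device_class_ips: float,
                          tolerance: float = 0.5) -> bool:
    # Flag anomalously slow calibration: a reported
    # iterations-per-second rate far below the known baseline for
    # the device model suggests deliberate throttling.
    return reported_ips >= tolerance * device_class_ips
]]></sourcecode>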
        </section>

        <section anchor="vdf-timing-attacks">
          <name>Timing Side Channels</name>

          <t>
            VDF computation timing may leak information:
          </t>

          <ul>
            <li>
              <t>Iteration Count Inference:</t>
              <t>
                Network observers may infer iteration counts from
                checkpoint timing. This reveals only what is already
                public in the evidence packet.
              </t>
            </li>

            <li>
              <t>Content Inference:</t>
              <t>
                VDF computation time is independent of content (fixed
                iteration count per checkpoint). No content leakage
                through timing.
              </t>
            </li>
          </ul>

          <t>
            VDF implementations SHOULD use constant-time hash operations
            where available, though timing variations in VDF computation
            itself do not compromise security.
          </t>
        </section>
      </section>
  </section>


    <!-- Section 5: Absence Proofs -->
    <section anchor="absence-proofs">
      <name>Absence Proofs: Negative Evidence</name>

      <t>
        This section defines the Absence Proofs mechanism, which enables
        bounded claims about what did NOT occur during document creation.
        Unlike positive evidence (proving something happened), absence
        proofs provide negative evidence (proving something did not happen,
        within defined bounds and trust assumptions). This capability extends the RATS <xref target="RFC9334"/>
        evidence model with a novel class of claims particularly suited to process attestation.
      </t>

      <section anchor="absence-design-philosophy">
        <name>Design Philosophy</name>

        <t>
          Absence proofs address a fundamental question in process
          attestation: can meaningful claims be made about events that
          did not occur? The answer is nuanced and depends on carefully
          articulated trust boundaries; different claim types require
          different levels of trust in the Attesting Environment.
        </t>

        <section anchor="absence-value-proposition">
          <name>The Value of Bounded Claims</name>

          <t>
            Traditional evidence systems focus on positive claims: "X
            happened at time T." Absence proofs extend this model to bounded
            negative claims: "X did not exceed threshold Y during interval (T1, T2)."
            The value of bounded claims lies in their falsifiability, which distinguishes them from
            unbounded claims that cannot be meaningfully verified:
          </t>

          <dl>
            <dt>Positive Claim:</dt>
            <dd>
              "The author typed this document" -- difficult to verify,
              requires trust in the entire authoring environment.
            </dd>

            <dt>Bounded Negative Claim:</dt>
            <dd>
              "No single edit added more than 500 characters" -- verifiable
              directly from the segment chain without additional trust
              assumptions.
            </dd>
          </dl>

          <t>
            Bounded claims shift the burden of proof: instead of claiming
            what DID happen (which requires comprehensive monitoring), the
            Attester claims what did NOT happen (which can be bounded by
            observable evidence derived from the segment chain). This
            inversion of the evidentiary focus enables meaningful claims
            without comprehensive surveillance.
          </t>
        </section>

        <section anchor="absence-limits">
          <name>Inherent Limits of Negative Evidence</name>

          <t>
            Absence proofs have fundamental limitations that MUST be
            clearly communicated to Relying Parties:
          </t>

          <dl>
            <dt>Monitoring Gaps:</dt>
            <dd>
              Absence claims are valid only for monitored intervals;
              gaps in monitoring create gaps in absence guarantees.
            </dd>

            <dt>Trust Boundaries:</dt>
            <dd>
              Some absence claims require trust in the Attesting
              Environment (AE), and this trust must be explicitly
              documented.
            </dd>

            <dt>Threshold Semantics:</dt>
            <dd>
              "No paste above 500 characters" does not imply "no paste".
              Claims are bounded, not absolute, and Relying Parties must
              consider the specific thresholds when assessing evidence.
            </dd>

            <dt>Behavioral Consistency versus Authorship:</dt>
            <dd>
              Absence claims describe observable behavioral patterns,
              NOT authorship, intent, or cognitive processes. They
              document consistency between the declared process and
              observable evidence, rather than making claims about the
              identity or capabilities of the author.
            </dd>
          </dl>
        </section>
      </section>

      <section anchor="absence-trust-boundary">
        <name>Trust Boundary: Computationally Bound vs. Monitoring-Dependent</name>

        <t>
          The critical architectural distinction in absence proofs is
          between claims verifiable from the Evidence alone (trustless)
          and claims that require trust in the Attesting Environment's
          monitoring capabilities. This distinction aligns with the RATS
          separation between Evidence appraisal and Attesting Environment trust.
        </t>

        <section anchor="absence-computationally-bound-intro">
          <name>Computationally Bound Claims (1-15)</name>

          <t>
            Computationally-bound claims can be verified by any party with
            access to the Evidence packet. No trust in the Attesting
            Environment is required beyond basic data integrity, because these
            claims are derived purely from the segment chain structure. A
            Verifier independently confirms these claims by parsing the
            segment chain, verifying chain integrity (SHA-256 hashes, MACs,
            VDF linkage), computing the relevant metrics from segment data,
            and comparing them against the claimed thresholds. This class of
            claims requires no interaction with the Attester and no trust in
            its monitoring capabilities.
          </t>
        </section>

        <section anchor="absence-monitoring-dependent-intro">
          <name>Monitoring-Dependent Claims (16-63)</name>

          <t>
            Monitoring-dependent claims require trust that the Attesting
            Environment correctly observed and reported specific events.
            These claims cannot be verified from the segment chain alone
            because they depend on real-time monitoring of events external
            to the document state. For each monitoring-dependent claim, the
            Verifier must assess:
          </t>

          <ul>
            <li>
              whether the AE had the capability to observe the relevant
              events (clipboard access, process enumeration, etc.),
            </li>
            <li>
              whether the AE operated with integrity during the
              monitoring period,
            </li>
            <li>
              whether monitoring was continuous or had gaps, and
            </li>
            <li>
              what attestation (if any) supports the AE integrity claim.
            </li>
          </ul>

          <t>
            The ae-trust-basis structure documents these trust assumptions
            explicitly, enabling informed Relying Party decisions. Hardware attestation via TPM <xref target="TPM2.0"/>
            or Secure Enclave may be used to strengthen the AE integrity claim, though such
            attestation is optional and its absence must be reflected in the Attestation Result caveats.
          </t>
        </section>

        <section anchor="absence-trust-comparison">
          <name>Trust Model Comparison</name>

          <table>
            <thead>
              <tr>
                <th>Aspect</th>
                <th>Computationally Bound</th>
                <th>Monitoring-Dependent</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Verification</td>
                <td>Independent, trustless</td>
                <td>Requires AE trust</td>
              </tr>
              <tr>
                <td>Data Source</td>
                <td>Segment chain only</td>
                <td>Real-time event monitoring</td>
              </tr>
              <tr>
                <td>Confidence Basis</td>
                <td>Cryptographic proof</td>
                <td>AE integrity attestation</td>
              </tr>
              <tr>
                <td>Forgery Resistance</td>
                <td>Requires VDF recomputation</td>
                <td>Requires AE compromise</td>
              </tr>
              <tr>
                <td>Claim Types</td>
                <td>1-15</td>
                <td>16-63</td>
              </tr>
            </tbody>
          </table>
        </section>
      </section>

      <section anchor="absence-computationally-bound-claims">
        <name>Computationally Bound Claims (Types 1-15)</name>

        <t>
          The computationally-bound claims in the range 1-15 can be verified
          directly from the CBOR-encoded Evidence packet without trusting the
          Attesting Environment's monitoring capabilities. Verification
          requires only the cryptographic primitives (SHA-256 for hash
          chains, HMAC for binding verification, VDF recomputation for
          temporal proofs) and the CDDL schema to parse the segment
          structures. Because these claims derive their truth value entirely
          from data present in the segment chain, with no dependency on
          external monitoring, they are the strongest form of evidence within
          the RATS architecture: they require only cryptographic verification
          and produce binary (PROVEN or FAILED) results.
        </t>

        <table>
          <thead>
            <tr>
              <th>Type</th>
              <th>Claim</th>
              <th>Proves</th>
              <th>Verification Method</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1</td>
              <td>max-single-delta-chars</td>
              <td>No single checkpoint added more than N characters</td>
              <td>max(delta.chars-added) across all checkpoints</td>
            </tr>
            <tr>
              <td>2</td>
              <td>max-single-delta-bytes</td>
              <td>No single checkpoint added more than N bytes</td>
              <td>Derived from char counts with encoding factor</td>
            </tr>
            <tr>
              <td>3</td>
              <td>max-net-delta-chars</td>
              <td>No single checkpoint had net change exceeding N chars</td>
              <td>max(|chars-added - chars-deleted|) per checkpoint</td>
            </tr>
            <tr>
              <td>4</td>
              <td>min-vdf-duration-seconds</td>
              <td>Total VDF time exceeds N seconds</td>
              <td>sum(claimed-duration) across checkpoints</td>
            </tr>
            <tr>
              <td>5</td>
              <td>min-vdf-duration-per-kchar</td>
              <td>At least N seconds of VDF time per 1000 characters</td>
              <td>total_vdf_seconds / (final_char_count / 1000)</td>
            </tr>
            <tr>
              <td>6</td>
              <td>checkpoint-chain-complete</td>
              <td>No gaps in segment sequence</td>
              <td>Verify sequence numbers are consecutive</td>
            </tr>
            <tr>
              <td>7</td>
              <td>checkpoint-chain-consistent</td>
              <td>All prev-hash values match prior tree-root</td>
              <td>Verify hash chain linkage</td>
            </tr>
            <tr>
              <td>8</td>
              <td>jitter-entropy-above-threshold</td>
              <td>Captured entropy exceeds N bits</td>
              <td>sum(estimated-entropy-bits) from jitter-binding</td>
            </tr>
            <tr>
              <td>9</td>
              <td>jitter-samples-above-count</td>
              <td>Jitter sample count exceeds N</td>
              <td>sum(sample-count) from jitter-summary</td>
            </tr>
            <tr>
              <td>10</td>
              <td>revision-points-above-count</td>
              <td>Document had at least N revision points</td>
              <td>Count checkpoints where chars-deleted &gt; 0</td>
            </tr>
            <tr>
              <td>11</td>
              <td>session-count-above-threshold</td>
              <td>Evidence spans at least N sessions</td>
              <td>Count distinct session boundaries in chain</td>
            </tr>
            <tr>
              <td>12</td>
              <td>cognitive-load-integrity</td>
              <td>Complexity-timing correlation exceeds threshold</td>
              <td>Spearman rank correlation between LZ complexity and jitter timing</td>
            </tr>
            <tr>
              <td>13</td>
              <td>intra-session-consistency</td>
              <td>Behavioral timing remains in stable cluster (KL Divergence &lt; delta)</td>
              <td>Statistical distance between segment Jitter Seals</td>
            </tr>
            <tr>
              <td>14</td>
              <td>complexity-timing-correlation</td>
              <td>Information Density correlates with Timing Density (rho &gt; threshold)</td>
              <td>Spearman rank correlation; segments with LZ Complexity &lt; 0.2 excluded</td>
            </tr>
            <tr>
              <td>15</td>
              <td>reserved</td>
              <td>Reserved for future computationally-bound claims</td>
              <td>N/A</td>
            </tr>
          </tbody>
        </table>

        <section anchor="absence-chain-verification-detail">
          <name>Verification Details</name>

          <t>
            For each computationally-bound claim, the Verifier performs a
            multi-step procedure: it first establishes chain integrity
            through SHA-256 hash chain verification and VDF linkage
            validation, then computes the relevant metric from the
            CBOR-encoded segment data, and finally compares the observed
            value against the claimed threshold. The pseudocode below
            illustrates this procedure; verify_chain_hashes implements
            SHA-256 prev-hash verification and verify_vdf_linkage implements
            VDF entanglement verification. The key property of
            computationally-bound claims within the RATS architecture is that
            verification depends ONLY on cryptographically verifiable segment
            data parsed according to the CDDL schema, with no dependency on
            external monitoring claims or trust in the Attesting
            Environment's reporting accuracy.
          </t>

          <artwork type="pseudocode"><![CDATA[
    verify_chain_claim(evidence, claim):
        # (1) Verify chain integrity first using SHA-256
        if not verify_chain_hashes(evidence.checkpoints):
            return INVALID("Chain integrity failure")
        if not verify_vdf_linkage(evidence.checkpoints):
            return INVALID("VDF linkage failure")

        # (2) Compute the metric from CBOR segment data
        observed_value = compute_metric(evidence.checkpoints, claim.type)

        # (3) Compare against threshold per CDDL schema
        match claim.type:
            case MAX_SINGLE_DELTA_CHARS:
                passes = (observed_value <= claim.threshold)
            case MIN_VDF_DURATION_SECONDS:
                passes = (observed_value >= claim.threshold)
            # ... remaining claim types are handled analogously

        # (4) Return verification result with cryptographic proof
        if passes:
            return PROVEN(observed_value, claim.threshold)
        else:
            return FAILED(observed_value, claim.threshold)
    ]]></artwork>
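          <t>
            A minimal executable sketch of the compute_metric step for three
            claim types, assuming checkpoints already decoded from CBOR into
            maps (the key names "chars-added", "claimed-duration", and "seq"
            are illustrative stand-ins for the actual integer CBOR keys):
          </t>
          <sourcecode type="python"><![CDATA[
def compute_metric(checkpoints, claim_type):
    # Derives the observed value for a computationally-bound claim
    # purely from segment data; no external monitoring is consulted.
    if claim_type == "max-single-delta-chars":
        return max(c["chars-added"] for c in checkpoints)
    if claim_type == "min-vdf-duration-seconds":
        return sum(c["claimed-duration"] for c in checkpoints)
    if claim_type == "checkpoint-chain-complete":
        seqs = [c["seq"] for c in checkpoints]
        return seqs == list(range(seqs[0], seqs[0] + len(seqs)))
    raise ValueError(f"unsupported claim type: {claim_type}")
]]></sourcecode>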
        </section>
      </section>

      <section anchor="absence-monitoring-claims">
        <name>Monitoring-Dependent Claims (Types 16-63)</name>

        <t>
          Unlike the computationally-bound claims, which depend only on
          SHA-256 hash verification and VDF recomputation, the claims in the
          range 16-63 require trust in the Attesting Environment's monitoring
          capabilities, as documented in the ae-trust-basis field defined in
          the CDDL schema. Each claim documents the specific AE capability
          required (clipboard monitoring, process enumeration, network
          traffic inspection) and the basis for trusting that capability,
          which may range from unverified assumptions to TPM 2.0 or Secure
          Enclave attestation of the AE state. Within the RATS architecture,
          these claims represent a weaker form of evidence than
          computationally-bound claims because they depend on external trust
          relationships, but they provide valuable evidence about events
          (paste operations, AI tool usage, network traffic) that cannot be
          derived from the segment chain alone.
        </t>

        <table>
          <thead>
            <tr>
              <th>Type</th>
              <th>Claim</th>
              <th>AE Capability Required</th>
              <th>Trust Basis</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>16</td>
              <td>max-paste-event-chars</td>
              <td>Clipboard monitoring</td>
              <td>OS-reported paste events</td>
            </tr>
            <tr>
              <td>17</td>
              <td>max-clipboard-access-chars</td>
              <td>Clipboard content access</td>
              <td>Application-level clipboard hooks</td>
            </tr>
            <tr>
              <td>18</td>
              <td>no-paste-from-ai-tool</td>
              <td>Clipboard source attribution</td>
              <td>OS process enumeration + clipboard</td>
            </tr>
            <tr>
              <td>20</td>
              <td>max-insertion-rate-wpm</td>
              <td>Real-time keystroke monitoring</td>
              <td>Input event stream timing</td>
            </tr>
            <tr>
              <td>21</td>
              <td>no-automated-input-pattern</td>
              <td>Input timing analysis</td>
              <td>Statistical pattern recognition</td>
            </tr>
            <tr>
              <td>22</td>
              <td>no-macro-replay-detected</td>
              <td>Input source verification</td>
              <td>OS input subsystem attestation</td>
            </tr>
          </tbody>
        </table>

        <section anchor="absence-ae-trust-documentation">
          <name>Trust Basis Documentation</name>

          <t>
            Each monitoring-dependent claim MUST include an ae-trust-basis
            structure, encoded in CBOR per the CDDL schema below, documenting
            the trust assumptions that underlie the claim. This explicit
            documentation of trust requirements is essential to the RATS
            architecture's goal of transparent attestation: it enables
            Verifiers to assess claim strength based on the trust basis
            rather than treating all claims uniformly. When hardware
            attestation via TPM 2.0 or Secure Enclave is available, the
            ae-trust-target field references the hardware-section for
            cross-verification, providing cryptographically grounded trust
            rather than mere assumption.
          </t>

          <artwork type="cddl"><![CDATA[
    ae-trust-basis = {
        1 => ae-trust-target,   ; trust-target
        2 => tstr,              ; justification
        3 => bool,              ; verified
    }

    ae-trust-target = &(
        witnessd-software-integrity: 1,
        os-reported-events: 2,
        application-reported-events: 3,
        tpm-attested-elsewhere: 16,
        se-attested-elsewhere: 17,
        unverified-assumption: 32,
    )
    ]]></artwork>

          <dl>
            <dt>witnessd-software-integrity (1):</dt>
            <dd>
              Trust that the witnessd software itself is unmodified and
              correctly implements monitoring. Requires software attestation
              or code signing verification.
            </dd>

            <dt>os-reported-events (2):</dt>
            <dd>
              Trust that the operating system correctly reports events
              (clipboard, process list, file access). Requires OS integrity.
            </dd>

            <dt>application-reported-events (3):</dt>
            <dd>
              Trust that the authoring application correctly reports events.
              Weakest trust level; application may be compromised.
            </dd>

            <dt>tpm-attested-elsewhere (16):</dt>
            <dd>
              TPM attestation of the AE state exists in the
              hardware-section. Cross-reference for verification.
            </dd>

            <dt>se-attested-elsewhere (17):</dt>
            <dd>
              Secure Enclave attestation of the AE state exists in the
              hardware-section. Cross-reference for verification.
            </dd>

            <dt>unverified-assumption (32):</dt>
            <dd>
              The claim is based on assumptions that cannot be verified.
              Relying Party must decide whether to accept based on context.
            </dd>
          </dl>

          <t>
            The justification field provides human-readable explanation
            of why the trust basis is believed adequate. The verified
            field indicates whether the trust basis was cryptographically
            verified (true) or merely assumed (false).
          </t>
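          <t>
            A Verifier-side summary of a decoded ae-trust-basis map can be
            sketched as follows. The helper and its output format are
            illustrative, not part of this specification; the input is the
            decoded CBOR map {1: trust-target, 2: justification, 3: verified}:
          </t>
          <sourcecode type="python"><![CDATA[
# Registered ae-trust-target values from the CDDL schema.
AE_TRUST_TARGETS = {
    1: "witnessd-software-integrity",
    2: "os-reported-events",
    3: "application-reported-events",
    16: "tpm-attested-elsewhere",
    17: "se-attested-elsewhere",
    32: "unverified-assumption",
}

def trust_basis_summary(basis: dict) -> str:
    # Render a human-readable line for Relying Party review,
    # distinguishing verified trust from mere assumption.
    target = AE_TRUST_TARGETS.get(basis[1], "unknown")
    verified = "verified" if basis[3] else "assumed"
    return f"{target} ({verified}): {basis[2]}"
]]></sourcecode>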
        </section>


      <section anchor="absence-monitoring-coverage">
        <name>Monitoring Coverage</name>

        <t>
          Honest documentation of monitoring gaps is essential for meaningful
          absence claims within the RATS architecture. The monitoring-coverage
          structure, defined in CDDL and encoded in CBOR, captures the scope
          and limitations of AE monitoring. Unlike computationally-bound
          claims, which can reference the complete segment chain verified
          through SHA-256 hash linkage, monitoring-dependent claims are valid
          only during periods when the relevant monitoring was active, making
          coverage documentation critical for accurate confidence assessment.
          Timestamps in the monitoring-intervals array use CBOR tag 1
          (epoch-based date/time) and correspond to times expressible in
          RFC 3339 <xref target="RFC3339"/> format.
        </t>

        <artwork type="cddl"><![CDATA[
    monitoring-coverage = {
        1 => bool,                  ; keyboard-monitored
        2 => bool,                  ; clipboard-monitored
        3 => [+ time-interval],     ; monitoring-intervals
        4 => ratio-millibits,       ; coverage-fraction (0-1000 = 0.0-1.0)
        ? 5 => hardware-attestation, ; monitoring-attestation
    }

    time-interval = {
        1 => pop-timestamp,         ; start
        2 => pop-timestamp,         ; end
    }
    ]]></artwork>

        <section anchor="absence-coverage-fields">
          <name>Coverage Fields</name>

          <dl>
            <dt>keyboard-monitored (key 1):</dt>
            <dd>
              Boolean indicating whether keyboard input events were
              monitored during the session. If false, claims about
              typing patterns (20-22) cannot be made.
            </dd>

            <dt>clipboard-monitored (key 2):</dt>
            <dd>
              Boolean indicating whether clipboard operations were
              monitored. If false, claims about paste events (16-18)
              cannot be made.
            </dd>

            <dt>monitoring-intervals (key 3):</dt>
            <dd>
              Array of time intervals during which monitoring was active.
              Gaps between intervals represent periods where monitoring
              was suspended (application backgrounded, system sleep, etc.).
            </dd>

            <dt>coverage-fraction (key 4):</dt>
            <dd>
              Fraction of total session time covered by monitoring,
              calculated as sum(interval_duration) / total_session_duration.
              Values below 0.95 indicate significant monitoring gaps
              that may affect absence claim confidence.
            </dd>

            <dt>monitoring-attestation (key 5, optional):</dt>
            <dd>
              Hardware attestation that monitoring was active during
              the claimed intervals. Provides stronger assurance than
              self-reported coverage.
            </dd>
          </dl>
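          <t>
            The coverage-fraction computation can be sketched directly from
            the field definitions above (intervals and session bounds as
            epoch seconds; the clipping of intervals to the session window is
            an implementation detail, not a normative requirement):
          </t>
          <sourcecode type="python"><![CDATA[
def coverage_fraction_millibits(intervals, session_start, session_end):
    # intervals: list of (start, end) pairs in epoch seconds.
    # Returns the coverage-fraction field value (0-1000 = 0.0-1.0),
    # clipping each interval to the session window.
    covered = sum(min(end, session_end) - max(start, session_start)
                  for start, end in intervals
                  if end > session_start and start < session_end)
    total = session_end - session_start
    return round(1000 * covered / total)
]]></sourcecode>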
        </section>

        <section anchor="absence-gap-semantics">
          <name>Gap Semantics</name>

          <t>
            Monitoring gaps have explicit semantic impact on absence claims:
          </t>

          <ul>
            <li>
              <t>Covered Intervals:</t>
              <t>
                Absence claims apply fully during covered intervals.
                "No paste above 500 chars during (T1, T2)" means the
                AE would have detected any such paste.
              </t>
            </li>

            <li>
              <t>Gap Intervals:</t>
              <t>
                During gaps, monitoring-dependent claims cannot be made.
                An event could have occurred unobserved.
              </t>
            </li>

            <li>
              <t>Gap-Aware Claims:</t>
              <t>
                If coverage-fraction is below 1.0, absence claims SHOULD
                include a caveat noting the monitoring gap percentage.
              </t>
            </li>
          </ul>

          <t>
            Chain-verifiable claims (1-15) are NOT affected by monitoring
            gaps because they are derived from the segment chain, which
            has no gaps (checkpoint-chain-complete verifies this).
          </t>
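          <t>
            The gap-aware caveat rule can be sketched as a small helper; the
            caveat wording is illustrative, and the input is the
            coverage-fraction field value (0-1000):
          </t>
          <sourcecode type="python"><![CDATA[
def gap_caveats(coverage_millibits: int) -> list:
    # Emit a caveat whenever coverage-fraction is below 1.0,
    # stating the unmonitored percentage of the session.
    caveats = []
    if coverage_millibits < 1000:
        gap_pct = (1000 - coverage_millibits) / 10
        caveats.append(
            f"monitoring gap: {gap_pct:.1f}% of session unmonitored")
    return caveats
]]></sourcecode>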
        </section>
      </section>
      </section>

      <section anchor="absence-structure">
        <name>Absence Section Structure</name>

        <t>
          The absence-section appears as an optional field (key 15) in the
          evidence-packet structure, defined in CDDL and encoded in CBOR, and
          contributes to the Maximum evidence tier when present. The
          structure contains the monitoring-coverage documentation; an array
          of absence-claim structures, each with explicit confidence levels
          and trust basis documentation per the RATS transparency
          requirements; and an optional claim-summary that enables quick
          assessment of how many claims are computationally-bound (provable
          from SHA-256 hash chains and VDF proofs alone) versus
          monitoring-dependent (requiring AE trust).
        </t>

        <artwork type="cddl"><![CDATA[
    absence-section = {
        1 => monitoring-coverage,     ; monitoring-coverage
        2 => [+ absence-claim],       ; claims
        ? 3 => claim-summary,         ; claim-summary
    }

    claim-summary = {
        1 => uint,                    ; computationally-bound-count
        2 => uint,                    ; monitoring-dependent-count
        3 => bool,                    ; all-claims-attested
    }

    absence-claim = {
        1 => absence-claim-type,      ; claim-type
        2 => absence-threshold,       ; threshold
        3 => absence-proof,           ; proof
        4 => absence-confidence,      ; confidence
        ? 5 => ae-trust-basis,        ; ae-trust-basis (monitoring)
    }

    absence-threshold = {
        1 => uint / null,             ; value (millibits or count, type-dependent)
    }

    absence-proof = {
        1 => absence-proof-method,    ; proof-method
        2 => absence-evidence,        ; evidence
    }

    absence-proof-method = &(
        checkpoint-chain-analysis: 1,
        keystroke-analysis: 2,
        platform-attestation: 3,
        network-attestation: 4,
        statistical-inference: 5,
    )

    absence-evidence = {
        ? 1 => [uint, uint],          ; checkpoint-range
        ? 2 => uint,                  ; max-observed-value
        ? 3 => uint,                  ; max-observed-rate-per-min (integer)
        ? 4 => tstr,                  ; statistical-test
        ? 5 => p-value-centibits,     ; p-value (0-10000 = 0.0000-1.0000)
        ? 6 => bstr,                  ; attestation-ref
    }

    absence-confidence = {
        1 => confidence-level,        ; level
        2 => [* tstr],                ; caveats
    }

    confidence-level = &(
        proven: 1,
        high: 2,
        medium: 3,
        low: 4,
    )
    ]]></artwork>

        <section anchor="absence-confidence-levels">
          <name>Confidence Levels</name>

          <dl>
            <dt>proven (1):</dt>
            <dd>
              The claim is cryptographically provable from the Evidence.
              Only computationally-bound claims (1-15) can achieve this level.
            </dd>

            <dt>high (2):</dt>
            <dd>
              Strong evidence supports the claim. For monitoring-dependent
              claims, requires hardware attestation of AE integrity and
              high monitoring coverage (&gt;95%).
            </dd>

            <dt>medium (3):</dt>
            <dd>
              Reasonable evidence supports the claim. AE integrity is
              assumed but not hardware-attested. Monitoring coverage
              is acceptable (&gt;80%).
            </dd>

            <dt>low (4):</dt>
            <dd>
              Weak evidence supports the claim. Significant caveats apply.
              Monitoring gaps exist or AE trust basis is unverified.
            </dd>
          </dl>
        </section>
      </section>

      <section anchor="absence-verification-procedure">
        <name>Verification Procedure</name>

        <t>
          A Verifier appraises absence claims through a structured
          procedure that distinguishes computationally-bound from
          monitoring-dependent claims:
        </t>

        <section anchor="absence-verify-chain">
          <name>Step 1: Verify Computationally Bound Claims</name>

          <t>
            For claims with type 1-15:
          </t>

          <ol>
            <li>
              <t>Verify Evidence Integrity:</t>
              <t>
                Verify segment chain hashes, VDF linkage, and
                structural validity per the base protocol.
              </t>
            </li>

            <li>
              <t>Extract Metrics:</t>
              <t>
                Compute the relevant metric from segment data
                (e.g., max delta chars, total VDF duration).
              </t>
            </li>

            <li>
              <t>Compare Threshold:</t>
              <t>
                Verify the computed metric satisfies the claimed threshold.
              </t>
            </li>

            <li>
              <t>Assign Confidence:</t>
              <t>
                Chain-verifiable claims that pass receive confidence
                level "proven" (1).
              </t>
            </li>
          </ol>
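          <t>
            The steps above can be sketched in Python. The segment field
            names and the check_absence_claim helper are illustrative
            assumptions, not normative; the sketch assumes Evidence
            integrity was already verified per the base protocol:
          </t>

          <artwork type="python"><![CDATA[
def check_absence_claim(segments, claim_type, threshold):
    """Verify a computationally-bound absence claim from segment data.

    Returns (verified, confidence): chain-verifiable claims that
    pass receive confidence level "proven" (1).
    """
    if claim_type == 1:          # max-single-delta-chars
        metric = max(s["delta_chars"] for s in segments)
        verified = metric <= threshold
    elif claim_type == 4:        # min-vdf-duration-seconds
        metric = sum(s["vdf_seconds"] for s in segments)
        verified = metric >= threshold
    else:
        raise ValueError("not a supported chain-verifiable type")
    return verified, (1 if verified else None)

segments = [{"delta_chars": 12, "vdf_seconds": 300},
            {"delta_chars": 480, "vdf_seconds": 3500}]
print(check_absence_claim(segments, 1, 500))   # (True, 1)
          ]]></artwork>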
        </section>

        <section anchor="absence-verify-monitoring">
          <name>Step 2: Appraise Monitoring-Dependent Claims</name>

          <t>
            For claims with type 16-63:
          </t>

          <ol>
            <li>
              <t>Assess AE Trust Basis:</t>
              <t>
                Examine the ae-trust-basis for each claim. Determine
                whether the trust target is appropriate for the claim
                type and whether it was verified.
              </t>
            </li>

            <li>
              <t>Evaluate Monitoring Coverage:</t>
              <t>
                Check monitoring-coverage to determine whether the
                relevant monitoring was active. Verify coverage-fraction
                is adequate for the confidence level claimed.
              </t>
            </li>

            <li>
              <t>Cross-Reference Hardware Attestation:</t>
              <t>
                If ae-trust-target is tpm-attested-elsewhere (16) or
                se-attested-elsewhere (17), verify the corresponding
                attestation exists in hardware-section.
              </t>
            </li>

            <li>
              <t>Evaluate Evidence:</t>
              <t>
                Examine the absence-evidence for supporting data.
                Statistical tests should have appropriate p-values;
                attestation references should be verifiable.
              </t>
            </li>

            <li>
              <t>Assign Confidence:</t>
              <t>
                Based on the above factors, assign confidence level
                (2-4). Level 1 (proven) is NOT available for
                monitoring-dependent claims.
              </t>
            </li>

            <li>
              <t>Document Caveats:</t>
              <t>
                Record any limitations or assumptions in the caveats
                array of the verification result.
              </t>
            </li>
          </ol>
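          <t>
            The confidence assignment in steps 1-5 can be condensed into a
            small decision function; the thresholds follow the
            confidence-level definitions in this document, while the
            function itself is an illustrative sketch:
          </t>

          <artwork type="python"><![CDATA[
def assign_confidence(hw_attested, coverage_fraction, trust_verified):
    """Map appraisal factors to a confidence level (2-4).

    Level 1 (proven) is reserved for computationally-bound claims.
    high (2) requires hardware attestation of AE integrity and >95%
    coverage; medium (3) requires >80% coverage with an assumed but
    unattested AE; anything weaker is low (4).
    """
    if hw_attested and trust_verified and coverage_fraction > 0.95:
        return 2  # high
    if trust_verified and coverage_fraction > 0.80:
        return 3  # medium
    return 4      # low

print(assign_confidence(True, 0.99, True))   # 2
          ]]></artwork>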
        </section>

        <section anchor="absence-verify-summary">
          <name>Step 3: Produce Verification Summary</name>

          <t>
            The Verifier produces a result-claim for each absence-claim
            examined:
          </t>

          <artwork type="cddl"><![CDATA[
result-claim = {
        1 => uint,                      ; claim-type
        2 => bool,                      ; verified
        ? 3 => tstr,                    ; detail
        ? 4 => confidence-level,        ; claim-confidence
}
]]></artwork>
        </section>

      <section anchor="absence-rats-mapping">
        <name>RATS Architecture Mapping</name>

        <t>
          Absence proofs extend the RATS (Remote ATtestation procedureS)
          evidence model in several ways:
        </t>

        <section anchor="absence-rats-roles">
          <name>Role Distribution</name>

          <dl>
            <dt>Attester Responsibility:</dt>
            <dd>
              <t>
                The Attester (witnessd AE) generates absence claims
                based on its monitoring observations. For computationally-bound
                claims, the Attester merely assembles segment data
                in a format that enables Verifier computation. For
                monitoring-dependent claims, the Attester makes assertions
                about events it observed (or did not observe).
              </t>
            </dd>

            <dt>Verifier Responsibility:</dt>
            <dd>
              <t>
                The Verifier independently verifies computationally-bound
                claims by recomputing metrics from Evidence. For
                monitoring-dependent claims, the Verifier appraises
                the trust basis and determines whether to accept the
                Attester's monitoring assertions.
              </t>
            </dd>

            <dt>Relying Party Responsibility:</dt>
            <dd>
              <t>
                The Relying Party consumes the attestation-result
                (.war file) and decides whether the verified claims
                meet their requirements. Different use cases may
                require different confidence levels or claim types.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="absence-rats-evidence-extension">
          <name>Evidence Model Extension</name>

          <t>
            Standard RATS evidence attests to system state (software
            versions, configuration). Absence proofs add a new category:
          </t>

          <dl>
            <dt>State Evidence (traditional RATS):</dt>
            <dd>
              "The system was in configuration C at time T."
            </dd>

            <dt>Behavioral Consistency Evidence (absence proofs):</dt>
            <dd>
              "Observable behavior during interval (T1, T2) was consistent
              with constraint X."
            </dd>
          </dl>

          <t>
            This extension enables attestation about processes, not just
            states. The segment chain provides the evidentiary basis
            for process claims that would otherwise require continuous
            trusted monitoring.
          </t>
        </section>

        <section anchor="absence-rats-appraisal-policy">
          <name>Appraisal Policy Integration</name>

          <t>
            Verifiers MAY define appraisal policies that specify:
          </t>

          <ul>
            <li>
              Which absence claim types are required for acceptance
            </li>
            <li>
              Minimum confidence levels for each claim type
            </li>
            <li>
              Required trust basis for monitoring-dependent claims
            </li>
            <li>
              Minimum monitoring coverage thresholds
            </li>
          </ul>

          <t>
            Example policy (informative):
          </t>

          <artwork><![CDATA[
    policy:
      required_claims:
        - type: 1   # max-single-delta-chars
          threshold: 500
          min_confidence: proven
        - type: 4   # min-vdf-duration-seconds
          threshold: 3600
          min_confidence: proven
        - type: 16  # max-paste-event-chars
          threshold: 200
          min_confidence: high
          required_trust_basis: [1, 16, 17]  # SE or TPM attested
      min_monitoring_coverage: 0.95
    ]]></artwork>
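          <t>
            A Relying Party might evaluate such a policy against verified
            result-claims as sketched below; the dictionary layout and the
            evaluate_policy helper are illustrative assumptions, relying
            on the confidence ordering proven(1) &lt; high(2) &lt;
            medium(3) &lt; low(4):
          </t>

          <artwork type="python"><![CDATA[
def evaluate_policy(policy, result_claims, coverage_fraction):
    """Check verified result-claims against an appraisal policy.

    result_claims: {claim_type: {"verified": bool, "confidence": int}}
    A smaller confidence number means stronger confidence.
    """
    if coverage_fraction < policy["min_monitoring_coverage"]:
        return False
    for req in policy["required_claims"]:
        rc = result_claims.get(req["type"])
        if rc is None or not rc["verified"]:
            return False
        if rc["confidence"] > req["min_confidence"]:
            return False   # confidence weaker than required
    return True

policy = {"min_monitoring_coverage": 0.95,
          "required_claims": [{"type": 1, "min_confidence": 1},
                              {"type": 16, "min_confidence": 2}]}
claims = {1: {"verified": True, "confidence": 1},
          16: {"verified": True, "confidence": 2}}
print(evaluate_policy(policy, claims, 0.97))   # True
          ]]></artwork>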
        </section>
      </section>

      <section anchor="absence-security">
        <name>Security Considerations</name>

        <section anchor="absence-security-limits">
          <name>What Absence Claims Do NOT Prove</name>

          <t>
            Absence claims have explicit limits that MUST be understood
            by all parties:
          </t>

          <dl>
            <dt>Absence claims do NOT prove authorship:</dt>
            <dd>
              "No single edit added more than 500 characters" does not
              prove who performed the edits. It proves only that the
              observable edit pattern had this property.
            </dd>

            <dt>Absence claims do NOT prove intent:</dt>
            <dd>
              "No paste from AI tool detected" does not prove the author
              intended to write without AI assistance. The author may
              have used AI tools in ways that evade detection.
            </dd>

            <dt>Absence claims do NOT prove cognitive process:</dt>
            <dd>
              Behavioral patterns consistent with human typing do not
              prove human cognition produced the content. The claims
              describe observable behavior, not mental states.
            </dd>

            <dt>Absence claims do NOT prove completeness:</dt>
            <dd>
              Claims apply only to monitored intervals. Events during
              monitoring gaps are not covered by absence claims.
            </dd>
          </dl>

          <t>
            Framing claims as "behavioral consistency" rather than
            "human authorship" avoids overclaiming and maintains
            intellectual honesty about what the evidence actually shows.
          </t>
        </section>

        <section anchor="absence-security-ae-compromise">
          <name>Attesting Environment Compromise</name>

          <t>
            Monitoring-dependent claims are only as trustworthy as the
            Attesting Environment:
          </t>

          <ul>
            <li>
              <t>Software Compromise:</t>
              <t>
                Modified witnessd software could fabricate monitoring
                observations. Mitigated by code signing and software
                attestation.
              </t>
            </li>

            <li>
              <t>OS Compromise:</t>
              <t>
                Compromised OS could report false clipboard contents
                or process lists. Mitigated by hardware attestation
                of OS integrity.
              </t>
            </li>

            <li>
              <t>Hardware Compromise:</t>
              <t>
                Physical access to device could enable hardware-level
                attacks. This is outside the threat model for most
                use cases.
              </t>
            </li>
          </ul>

          <t>
            The ae-trust-basis structure explicitly documents which
            trust assumptions apply, enabling Relying Parties to make
            informed decisions about acceptable risk.
          </t>
        </section>

        <section anchor="absence-security-evasion">
          <name>Monitoring Evasion</name>

          <t>
            Sophisticated adversaries may attempt to evade monitoring:
          </t>

          <dl>
            <dt>Timing-Based Evasion:</dt>
            <dd>
              Performing prohibited actions during monitoring gaps.
              Mitigated by high coverage requirements and gap
              documentation.
            </dd>

            <dt>Tool-Based Evasion:</dt>
            <dd>
              Using tools not in the detection list. Detection applies
              only to known tools; novel or unknown tools may evade
              detection.
            </dd>

            <dt>Channel-Based Evasion:</dt>
            <dd>
              Using alternative input channels (screen readers,
              accessibility features) not monitored by the AE.
              Mitigated by comprehensive input monitoring.
            </dd>

            <dt>Simulation:</dt>
            <dd>
              Generating input patterns that mimic human behavior.
              The jitter-seal and VDF mechanisms make this costly
              but not impossible. See forgery-cost-section.
            </dd>
          </dl>

          <t>
            Absence proofs do not claim to make evasion impossible,
            only to make it costly and to document the monitoring
            coverage that was actually achieved.
          </t>
        </section>

        <section anchor="absence-security-statistical">
          <name>Statistical Claim Limitations</name>

          <t>
            Claims based on statistical inference (proof-method 5)
            have inherent uncertainty:
          </t>

          <ul>
            <li>
              p-values indicate probability, not certainty
            </li>
            <li>
              Multiple testing increases false positive risk
            </li>
            <li>
              Adversarial inputs may exploit statistical assumptions
            </li>
          </ul>

          <t>
            Statistical claims SHOULD be assigned confidence level
            "medium" (3) or "low" (4) unless supported by additional
            evidence.
          </t>
        </section>
      </section>

      <section anchor="absence-privacy">
        <name>Privacy Considerations</name>

        <t>
          Absence claims may reveal information about the authoring
          process:
        </t>

        <ul>
          <li>
            <t>Edit Pattern Disclosure:</t>
            <t>
              Chain-verifiable claims reveal aggregate statistics about
              edit sizes and frequencies. This is inherent in the
              segment chain and cannot be hidden without removing
              the evidentiary basis for claims.
            </t>
          </li>

          <li>
            <t>Tool Usage Disclosure:</t>
            <t>
              Monitoring-dependent claims reveal that the AE was
              monitoring for AI tool usage. Users
              should be informed of this monitoring.
            </t>
          </li>

          <li>
            <t>Behavioral Fingerprinting:</t>
            <t>
              Detailed jitter data and monitoring observations could
              theoretically enable behavioral fingerprinting. The
              histogram aggregation in jitter-binding mitigates this
              for timing data.
            </t>
          </li>
        </ul>

        <t>
          Users SHOULD be informed which absence claims will be
          generated and have the option to disable specific monitoring
          capabilities if privacy concerns outweigh the value of
          those claims.
        </t>
      </section>
      </section>
    </section>


    <!-- Section 6: Forgery Cost Bounds -->
    <section anchor="forgery-cost-bounds">
      <name>Forgery Cost Bounds (Quantified Security)</name>

      <t>
        This section defines the forgery cost bounds mechanism, which
        provides quantified security analysis for Proof of Process evidence.
        Rather than claiming evidence is "secure" or "insecure" in absolute
        terms, this framework expresses security as minimum resource costs
        that an adversary must expend to produce counterfeit evidence.
      </t>

      <section anchor="fcb-design-philosophy">
        <name>Design Philosophy</name>

        <t>
          Traditional security claims are often binary: a system is either
          "secure" or "broken." This framing poorly serves attestation
          scenarios where:
        </t>

        <ul>
          <li>
            Adversary capabilities vary across resource levels
          </li>
          <li>
            Evidence value degrades gracefully rather than failing
            completely
          </li>
          <li>
            Relying Parties have different risk tolerances
          </li>
          <li>
            Hardware costs and computational speeds change over time
          </li>
        </ul>

        <t>
          The Proof of Process framework adopts quantified security:
          expressing security guarantees in terms of measurable costs
          (time, entropy, economic resources) that bound adversary
          capabilities.
        </t>

      <section anchor="fcb-cost-asymmetry">
        <name>Quantified Forgery Cost Bounds</name>
        <t>
          Forgery requires simulating the D_i &lt;-&gt; tau_i alignment during sequential VDF
          computation. This imposes a computational cost of O(n * VDF_iters), where
          n is the number of segments. Achieving psycholinguistic fidelity requires
          high-latency semantic processing synchronized with the VDF chain. Simulating the Error Topology (H=0.7, rho=0.8) within the sequential VDF phases requires approximately 10^3 trials per segment using a biological motor-skill model, further increasing the search space for forgery.
        </t>
        <t>
          Bound: a 1-hour human authoring session at a 4 GHz calibration
          rate corresponds to approximately 1.4 x 10^13 sequential
          hardware cycles (3600 s x 4 x 10^9 cycles/s). A bot must expend
          an equivalent number of sequential cycles, without the benefit
          of parallelism, to produce a valid correlation proof.
        </t>
      </section>


        <section anchor="fcb-non-claims">
          <name>What Forgery Cost Bounds Do NOT Claim</name>

          <t>
            Forgery cost bounds explicitly avoid claims that evidence is:
          </t>

          <ul>
            <li>
              <strong>Unforgeable:</strong> Given sufficient resources, any
              evidence can be forged. The bounds quantify "sufficient."
            </li>
            <li>
              <strong>Guaranteed authentic:</strong> Bounds express minimum
              forgery costs, not maximum. Cheaper attacks may exist that
              have not been discovered.
            </li>
            <li>
              <strong>Irrefutable proof:</strong> Evidence supports claims
              with quantified confidence, not mathematical certainty.
            </li>
            <li>
              <strong>Permanent:</strong> Cost bounds depreciate as hardware
              improves. Evidence verified today may have different bounds
              when re-evaluated in the future.
            </li>
          </ul>
        </section>
      </section>

      <section anchor="fcb-structure">
        <name>Forgery Cost Section Structure</name>

        <t>
          The forgery-cost-section appears in each evidence packet and
          contains four required components:
        </t>

        <artwork type="cddl"><![CDATA[
    forgery-cost-section = {
        1 => time-bound,           ; time-bound
        2 => entropy-bound,        ; entropy-bound
        3 => economic-bound,       ; economic-bound
        4 => security-statement,   ; security-statement
    }
    ]]></artwork>

        <t>
          These components represent orthogonal dimensions of forgery cost.
          A complete security assessment considers all four dimensions.
        </t>
      </section>

      <section anchor="fcb-time-bound">
        <name>Time Bound</name>

        <t>
          The time-bound quantifies the minimum wall-clock time required
          to recompute the VDF chain, establishing a lower bound on
          forgery duration that an adversary must expend regardless of
          available computational resources. The bound is enforced by the
          inherent sequentiality of the VDF construction: each iteration
          depends cryptographically on the previous output (computed via
          SHA-256 or a similar hash function), so even adversaries with
          parallel processing capabilities cannot reduce the wall-clock
          time required to forge an evidence chain within the RATS
          architecture.
        </t>

        <artwork type="cddl"><![CDATA[
    time-bound = {
        1 => uint,                 ; total-iterations
        2 => uint,                 ; calibration-rate
        3 => tstr,                 ; reference-hardware
        4 => uint,                 ; min-recompute-seconds (integer seconds)
        5 => bool,                 ; parallelizable
        ? 6 => uint,               ; max-parallelism
    }
    ]]></artwork>

        <section anchor="fcb-time-fields">
          <name>Field Definitions</name>

          <t>
            The time-bound structure, encoded in CBOR according to the
            CDDL schema above, contains six fields that together quantify
            the temporal cost of forgery within the RATS evidence
            framework:
          </t>

          <dl>
            <dt>total-iterations (key 1):</dt>
            <dd>
              The sum of all VDF iterations across all checkpoints in the
              evidence packet, computed as
              sum(checkpoint[i].vdf-proof.iterations) for all i. This is
              the raw count of sequential SHA-256 hash operations that
              must be recomputed.
            </dd>

            <dt>calibration-rate (key 2):</dt>
            <dd>
              The attested iterations-per-second from the calibration
              attestation, representing the maximum VDF computation speed
              on the Attesting Environment's hardware as measured through
              TPM 2.0 or similar hardware attestation mechanisms.
            </dd>

            <dt>reference-hardware (key 3):</dt>
            <dd>
              A human-readable description of the hardware used for
              calibration (e.g., "Apple M2 Pro", "Intel i9-13900K"), used
              for plausibility assessment rather than cryptographic
              verification.
            </dd>

            <dt>min-recompute-seconds (key 4):</dt>
            <dd>
              The minimum wall-clock seconds required to recompute the
              VDF chain on the reference hardware, calculated as
              total-iterations divided by calibration-rate. This is a
              lower bound; actual recomputation on slower hardware takes
              longer.
            </dd>

            <dt>parallelizable (key 5):</dt>
            <dd>
              A boolean indicating whether the VDF algorithm permits
              parallelization. Iterated hash VDFs using SHA-256
              (algorithms 1-15) always report false due to inherent
              sequentiality; certain succinct VDF constructions may
              permit limited parallelization.
            </dd>

            <dt>max-parallelism (key 6, optional):</dt>
            <dd>
              The maximum parallel speedup factor when parallelizable is
              true. Absent for iterated hash VDFs, which enforce strictly
              sequential computation.
            </dd>
          </dl>
        </section>

        <section anchor="fcb-time-verification">
          <name>Time Bound Verification</name>

          <t>
            A Verifier within the RATS architecture computes and validates
            the time bound through a procedure that ensures the claimed
            temporal costs are mathematically consistent with the VDF
            proofs embedded in the evidence chain:
          </t>

          <ol>
            <li>
              Traverse all checkpoints encoded in CBOR and sum the
              iterations field from each VDF proof, accumulating the
              total sequential SHA-256 hash operations that comprise the
              evidence chain.
            </li>
            <li>
              If a calibration attestation is present (typically signed
              via COSE and backed by TPM 2.0 hardware attestation),
              validate the hardware signature and check that
              calibration-rate matches the attested iterations-per-second
              from the trusted hardware module.
            </li>
            <li>
              Compute the minimum time by dividing total-iterations by
              calibration-rate, and verify that the result matches
              min-recompute-seconds within rounding tolerance, confirming
              mathematical consistency of the VDF chain.
            </li>
            <li>
              Perform a plausibility check: min-recompute-seconds must be
              consistent with the claimed authoring duration indicated by
              RFC 3339 timestamps. A significant discrepancy (e.g., a
              10-hour claimed session with only 1 minute of VDF time)
              indicates either misconfiguration of the Attesting
              Environment or potential manipulation of the evidence
              packet.
            </li>
          </ol>
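          <t>
            These checks can be sketched in Python. The dictionary field
            names and the factor-of-10 plausibility threshold are
            illustrative assumptions, not normative requirements:
          </t>

          <artwork type="python"><![CDATA[
def verify_time_bound(checkpoints, tb, claimed_session_seconds):
    """Consistency checks for a time-bound structure.

    checkpoints: list of dicts with a vdf_iterations field.
    tb: dict with total_iterations, calibration_rate, and
    min_recompute_seconds (names are illustrative).
    """
    # Step 1: sum iterations across all checkpoints.
    total = sum(c["vdf_iterations"] for c in checkpoints)
    if total != tb["total_iterations"]:
        return False
    # Step 3: min-recompute-seconds = total-iterations / calibration-rate.
    expected = total // tb["calibration_rate"]
    if abs(expected - tb["min_recompute_seconds"]) > 1:
        return False
    # Step 4: plausibility -- VDF time should not wildly undershoot
    # the claimed session duration (factor of 10 as an example).
    return tb["min_recompute_seconds"] * 10 >= claimed_session_seconds
          ]]></artwork>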
        </section>

        <section anchor="fcb-time-parallelization">
          <name>Parallelization Resistance</name>

          <t>
            The security of time bounds depends critically on VDF
            parallelization resistance, for which the cryptographic
            literature provides formal arguments that sequential
            computation cannot be accelerated by parallel hardware. For
            iterated hash VDFs using SHA-256, each iteration depends
            cryptographically on the previous output through the hash
            function's one-way property; no known technique computes
            H^n(x) faster than n sequential hash operations, and an
            adversary with P processors cannot compute the chain P times
            faster because the data dependency between iterations
            prevents parallelization. This property ensures that time
            bounds reflect wall-clock time rather than aggregate compute
            time: an adversary with access to an entire data center
            cannot forge 10 hours of evidence in 10 minutes by deploying
            60x more processors, since the sequential VDF chain must
            still be computed one iteration at a time. See
            <xref target="vdf-parallelization"/> for detailed analysis of
            parallelization resistance in each VDF algorithm supported by
            this RATS profile.
          </t>
        </section>
      </section>

      <section anchor="fcb-entropy-bound">
        <name>Entropy Bound</name>

        <t>
          The entropy-bound quantifies the unpredictability in the
          evidence chain as captured through the Jitter Seal behavioral
          entropy mechanism, establishing a lower bound on the
          probability of guessing or replaying the entropy commitments
          bound to the VDF chain through HMAC-SHA256
          <xref target="RFC2104"/> commitments. Within the RATS
          architecture, this bound represents the accumulated behavioral
          randomness from human input patterns that an adversary would
          need to predict or reproduce to forge authentic-appearing
          evidence. The CBOR encoding follows the CDDL schema below.
        </t>

        <artwork type="cddl"><![CDATA[
    entropy-bound = {
        1 => entropy-decibits,     ; total-entropy (decibits, /10 for bits)
        2 => uint,                 ; sample-count
        3 => entropy-decibits,     ; entropy-per-sample (decibits)
        4 => uint,                 ; brute-force-log2 (negative exponent, e.g., 64 = 2^-64)
        5 => bool,                 ; replay-possible
        ? 6 => tstr,               ; replay-prevention
    }
    ]]></artwork>

        <section anchor="fcb-entropy-fields">
          <name>Field Definitions</name>

          <t>
            The entropy-bound structure, encoded in CBOR according to the
            CDDL schema, contains six fields that together quantify the
            unpredictability barrier facing an adversary attempting to
            forge evidence within the RATS framework:
          </t>

          <dl>
            <dt>total-entropy (key 1):</dt>
            <dd>
              The aggregate entropy across all Jitter Seals in the
              evidence packet, expressed in decibits (divide by 10 for
              bits). It is computed as
              sum(jitter-summary[i].estimated-entropy-bits) for all i,
              where each Jitter Seal captures behavioral timing bound via
              HMAC-SHA256.
            </dd>

            <dt>sample-count (key 2):</dt>
            <dd>
              The total number of timing samples captured across all
              Jitter Seals. Higher sample counts increase confidence in
              the Min-Entropy (H_min) estimate derived from the timing
              histogram.
            </dd>

            <dt>entropy-per-sample (key 3):</dt>
            <dd>
              The average entropy contribution per timing sample, in
              decibits, calculated as total-entropy divided by
              sample-count. Typical human typing contributes 2-4 bits per
              inter-key interval based on motor timing variance.
            </dd>

            <dt>brute-force-log2 (key 4):</dt>
            <dd>
              The negative base-2 exponent of the probability of guessing
              the entropy commitment by brute force: a value of 64
              denotes a probability of 2^-64, approximately 5.4 x 10^-20.
            </dd>

            <dt>replay-possible (key 5):</dt>
            <dd>
              A boolean indicating whether Jitter Seal replay is
              theoretically possible. It is false when VDF entanglement
              is properly configured such that the HMAC entropy
              commitment appears in the VDF input chain.
            </dd>

            <dt>replay-prevention (key 6, optional):</dt>
            <dd>
              A human-readable description of the replay prevention
              mechanisms, typically a value such as "VDF entanglement
              with prev-checkpoint binding using SHA-256".
            </dd>
          </dl>
        </section>

        <section anchor="fcb-entropy-verification">
          <name>Entropy Bound Verification</name>

          <t>
            A Verifier within the RATS architecture computes and validates the entropy bound through a systematic five-step procedure that ensures the claimed entropy costs are mathematically consistent with the Jitter Seal commitments embedded in the CBOR evidence chain. First, the Verifier aggregates entropy by summing estimated-entropy-bits from each checkpoint's jitter-summary and verifying that the total matches the claimed total-entropy-bits field, ensuring no entropy claims have been inflated beyond what the underlying HMAC-SHA256 commitments support. Second, the Verifier counts samples by summing sample-count from each jitter-summary and verifying consistency with the claimed sample-count, confirming the behavioral timing data volume matches expectations for the authoring session duration indicated by RFC 3339 timestamps. Third, if raw-intervals are disclosed for transparency, the Verifier recomputes the histogram and Min-Entropy (H_min) independently, verifying consistency with the claimed entropy estimate to detect potential manipulation of entropy calculations. Fourth, the Verifier checks replay prevention by verifying that each HMAC entropy-commitment appears in the corresponding VDF input per the VDF chain construction, setting replay-possible to true if VDF entanglement is absent since unentangled entropy commitments could theoretically be replayed from previous sessions. Fifth, the Verifier computes brute-force probability by calculating 2^(-total-entropy-bits) and verifying that the result matches the claimed brute-force-probability within floating-point tolerance, confirming the security bound is mathematically accurate for the accumulated behavioral entropy.
          </t>
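          <t>
            The five steps above can be sketched in code. The following
            Python fragment is a non-normative illustration that assumes
            checkpoints are decoded into dictionaries with descriptive
            keys; the field names and helper functions are illustrative,
            not part of this specification.
          </t>

          <sourcecode type="python"><![CDATA[
import math
from collections import Counter

def min_entropy_bits(intervals):
    # H_min = -log2(p_max) over the timing-interval histogram, per sample.
    hist = Counter(intervals)
    p_max = max(hist.values()) / len(intervals)
    return -math.log2(p_max)

def verify_entropy_bound(checkpoints, claimed):
    # Steps 1-2: aggregate entropy and sample counts across Jitter Seals.
    total_bits = sum(c["jitter-summary"]["estimated-entropy-bits"]
                     for c in checkpoints)
    samples = sum(c["jitter-summary"]["sample-count"] for c in checkpoints)
    if total_bits < claimed["total-entropy-bits"]:
        return False, "entropy claim inflated"
    if samples != claimed["sample-count"]:
        return False, "sample-count mismatch"
    # Step 3: when raw-intervals are disclosed, recompute H_min per seal.
    for c in checkpoints:
        raw = c["jitter-summary"].get("raw-intervals")
        if raw is not None:
            recomputed = min_entropy_bits(raw) * len(raw)
            if recomputed + 1e-6 < c["jitter-summary"]["estimated-entropy-bits"]:
                return False, "disclosed intervals do not support claim"
    # Step 4: replay is possible unless every seal is VDF-entangled.
    replay = not all(c.get("vdf-entangled", False) for c in checkpoints)
    if replay != claimed["replay-possible"]:
        return False, "replay-possible flag inconsistent"
    # Step 5: brute-force probability must equal 2^(-total) within tolerance.
    expected = 2.0 ** (-claimed["total-entropy-bits"])
    if not math.isclose(expected, claimed["brute-force-probability"],
                        rel_tol=1e-9):
        return False, "brute-force probability incorrect"
    return True, "ok"
]]></sourcecode>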
        </section>

        <section anchor="fcb-entropy-requirements">
          <name>Minimum Entropy Requirements</name>

          <t>
            The RATS profile defined in this specification establishes RECOMMENDED minimum entropy thresholds by evidence tier, with thresholds calibrated to provide meaningful security guarantees against brute-force attacks on the HMAC-SHA256 entropy commitments embedded in Jitter Seals. The Basic tier requires a minimum of 32 bits of total entropy, corresponding to a brute-force probability less than 2.3 x 10^-10, suitable for low-stakes evidence where moderate forgery resistance suffices. The Standard tier requires a minimum of 64 bits of total entropy, corresponding to a brute-force probability less than 5.4 x 10^-20, providing strong forgery resistance appropriate for most professional and academic authorship attestation use cases. The Enhanced tier requires a minimum of 128 bits of total entropy, corresponding to a brute-force probability less than 2.9 x 10^-39, offering cryptographically strong guarantees approaching the security level of the underlying SHA-256 hash function for high-stakes evidence requiring maximum assurance. Evidence packets encoded in CBOR that fail to meet the minimum entropy thresholds for their claimed tier SHOULD be flagged in the security-statement caveats, enabling Relying Parties to make informed trust decisions within the RATS architecture about whether the behavioral entropy is sufficient for their risk tolerance.
          </t>
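          <t>
            A minimal sketch of tier enforcement, assuming lowercase tier
            names and example caveat wording (both illustrative rather
            than normative):
          </t>

          <sourcecode type="python"><![CDATA[
# RECOMMENDED minimums by tier, as specified above.
TIER_MIN_BITS = {"basic": 32, "standard": 64, "enhanced": 128}

def tier_caveats(tier, total_entropy_bits):
    # Returns caveat strings for the security-statement, if any.
    minimum = TIER_MIN_BITS[tier]
    if total_entropy_bits >= minimum:
        return []
    return ["total entropy %.1f bits below %s tier minimum of %d bits"
            % (total_entropy_bits, tier, minimum)]

def brute_force_probability(total_entropy_bits):
    return 2.0 ** (-total_entropy_bits)
]]></sourcecode>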
        </section>
      </section>

      <section anchor="fcb-economic-bound">
        <name>Economic Bound</name>

        <t>
          The economic-bound translates time requirements derived from VDF chains and entropy requirements captured through HMAC-SHA256 Jitter Seals into monetary costs, enabling Relying Parties within the RATS architecture to assess forgery feasibility in concrete economic terms that can be compared against the potential value of forgery. The CBOR encoding follows the CDDL schema specified below, providing a standardized representation of cost estimates that can be independently verified and compared across different evidence packets.
        </t>

        <artwork type="cddl"><![CDATA[
    economic-bound = {
        1 => tstr,                 ; cost-model-version
        2 => pop-timestamp,        ; cost-model-date
        3 => cost-estimate,        ; compute-cost
        4 => cost-estimate,        ; time-cost
        5 => cost-estimate,        ; total-min-cost
        6 => cost-estimate,        ; cost-per-hour-claimed
        7 => tstr,                 ; oracle-uri (Signed Pricing Feed)
    }

    cost-estimate = {
        1 => cost-microdollars,    ; usd (microdollars, /1000000 for USD)
        2 => cost-microdollars,    ; usd-low
        3 => cost-microdollars,    ; usd-high
        4 => tstr,                 ; basis
    }
    ]]></artwork>

        <section anchor="fcb-economic-fields">
          <name>Field Definitions</name>

          <t>
            The economic-bound structure, encoded in CBOR according to the CDDL schema, contains seven fields that translate the cryptographic and temporal costs of VDF chain recomputation into monetary terms for Relying Party assessment within the RATS framework. The cost-model-version field (key 1) contains an identifier for the cost model used (e.g., "witnessd-cost-2025Q1"), with versioning necessary because hardware prices and computational costs for SHA-256 operations change over time. The cost-model-date field (key 2) contains an RFC 3339 timestamp indicating when the cost model was established. The compute-cost field (key 3) quantifies the cost of computational resources required to recompute the VDF chain, including cloud compute instance cost for min-recompute-seconds of sequential SHA-256 operations, electricity cost for sustained computation, and amortized hardware cost if using dedicated equipment rather than cloud resources. The time-cost field (key 4) represents the opportunity cost of the wall-clock time required for forgery, since an adversary attempting to forge 10-hour evidence cannot use that time for other purposes, modeled as the economic value of the adversary's time at skilled labor rates. The total-min-cost field (key 5) represents the minimum total cost to forge the evidence combining compute and time costs, serving as the primary metric for cost-benefit analysis by Relying Parties. The cost-per-hour-claimed field (key 6) normalizes forgery cost by claimed authoring duration (calculated as total-min-cost divided by claimed-duration-hours derived from RFC 3339 timestamps), enabling fair comparison across evidence packets of different lengths within the RATS trust framework. The oracle-uri field (key 7) contains a URI identifying the signed pricing feed from which the cost model data was obtained, enabling Verifiers to cross-check the claimed rates against their source.
          </t>
        </section>

        <section anchor="fcb-cost-estimate">
          <name>Cost Estimate Structure</name>

          <t>
            Each cost-estimate structure within the economic-bound, encoded in CBOR according to the CDDL schema, includes a point estimate and confidence range to account for uncertainty in adversary resource access within the RATS trust model. The usd field (key 1) contains the point estimate in US dollars, representing the expected cost under typical assumptions about cloud compute pricing and electricity rates for sustaining the sequential SHA-256 operations required by VDF recomputation. The usd-low field (key 2) contains the lower bound of a 90% confidence interval, representing cost assuming the adversary has access to discounted resources such as pre-existing infrastructure, bulk compute contracts, or subsidized electricity that reduce marginal costs. The usd-high field (key 3) contains the upper bound of the 90% confidence interval, representing cost assuming the adversary must acquire resources at full market rates without existing infrastructure or preferential pricing arrangements. The basis field (key 4) contains a human-readable description of the cost calculation basis (e.g., "AWS c7i.large @ $0.085/hr + $0.10/kWh electricity"), enabling Relying Parties to assess whether the cost model assumptions are reasonable for their deployment context and adjust estimates accordingly based on their knowledge of adversary capabilities.
          </t>
        </section>

        <section anchor="fcb-economic-computation">
          <name>Cost Computation</name>

          <t>
            The reference cost computation for compute-cost quantifies the resources required to recompute the VDF chain with its sequential SHA-256 iterations, using the formula: hourly_rate = cloud_rate + elec_rate * power, where cloud_rate represents the cost of compute instances capable of sustained hashing, compute_hours = min_recompute_seconds / 3600 converts the VDF recomputation time to billable hours, and compute_cost_usd = hourly_rate * compute_hours yields the total computational expenditure. The 90% confidence interval assumes 50% rate variance to account for differences in adversary resource access, computed as compute_cost_low = compute_cost_usd * 0.5 for adversaries with discounted access and compute_cost_high = compute_cost_usd * 1.5 for those paying market rates.
          </t>

          <t>
            The reference cost computation for time-cost represents the opportunity cost of wall-clock time required for sequential VDF recomputation, using a skilled labor rate model where hourly_value = 50.0 USD and time_cost_usd = hourly_value * (min_recompute_seconds / 3600), reflecting the economic value of the adversary's time that cannot be used for other purposes during the forgery attempt. The confidence interval for time cost accounts for labor rate variance, computed as time_cost_low = time_cost_usd * 0.2 for adversaries in low-cost labor markets and time_cost_high = time_cost_usd * 4.0 for highly skilled adversaries whose time commands premium rates.
          </t>
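          <t>
            The reference formulas above can be expressed directly in
            code. The default rates below (cloud rate, electricity rate,
            power draw, labor rate) are illustrative assumptions, not
            normative values:
          </t>

          <sourcecode type="python"><![CDATA[
def compute_cost_usd(min_recompute_seconds,
                     cloud_rate=0.085,   # USD per hour (example rate)
                     elec_rate=0.10,     # USD per kWh (example rate)
                     power_kw=0.1):      # sustained draw in kW (assumed)
    hourly_rate = cloud_rate + elec_rate * power_kw
    compute_hours = min_recompute_seconds / 3600
    cost = hourly_rate * compute_hours
    # 90% confidence interval assuming 50% rate variance.
    return {"usd": cost, "usd-low": cost * 0.5, "usd-high": cost * 1.5}

def time_cost_usd(min_recompute_seconds, hourly_value=50.0):
    cost = hourly_value * (min_recompute_seconds / 3600)
    # Labor-rate variance: 0.2x for low-cost markets, 4x for premium rates.
    return {"usd": cost, "usd-low": cost * 0.2, "usd-high": cost * 4.0}
]]></sourcecode>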

          <t>
            These are reference calculations within the RATS framework, and implementations MAY use different cost models appropriate to their deployment context, provided the CBOR encoding follows the CDDL schema and the basis field documents the alternative model for Relying Party assessment.
          </t>
        </section>

      </section>

      <section anchor="fcb-security-statement">
        <name>Security Statement</name>

        <t>
          The security-statement provides a formal claim about evidence security within the RATS architecture, including explicit assumptions about VDF parallelization resistance, SHA-256 preimage resistance, and HMAC binding security, along with caveats that limit the scope of the security claim. The CBOR encoding follows the CDDL schema specified below, providing machine-readable security bounds that Relying Parties can evaluate against their policy requirements while also offering human-readable claims suitable for non-technical stakeholders.
        </t>

        <artwork type="cddl"><![CDATA[
    security-statement = {
        1 => tstr,                 ; claim
        2 => formal-security-bound, ; formal
        3 => [+ tstr],             ; assumptions
        4 => [* tstr],             ; caveats
    }

    formal-security-bound = {
        1 => uint,                 ; min-seconds (integer seconds)
        2 => entropy-decibits,     ; min-entropy (decibits, /10 for bits)
        3 => cost-microdollars,    ; min-cost (microdollars, /1000000 for USD)
    }
    ]]></artwork>

        <section anchor="fcb-statement-fields">
          <name>Field Definitions</name>

          <t>
            The security-statement structure, encoded in CBOR according to the CDDL schema, contains four fields that together provide both human-readable and machine-readable security claims within the RATS trust framework. The claim field (key 1) contains a human-readable security claim that MUST be phrased as a minimum bound rather than an absolute guarantee (e.g., "Forging this evidence requires at minimum 8.3 hours of sequential VDF computation, 67 bits of HMAC entropy prediction, and an estimated $42-$126 in resources"), avoiding language that implies unforgeable or irrefutable guarantees. The formal field (key 2) contains machine-readable security bounds for automated policy evaluation, enabling Relying Parties to programmatically compare evidence packets against their minimum acceptance thresholds without parsing natural language claims. The assumptions field (key 3) contains an array of assumptions under which the security claim holds, which MUST include at minimum a cryptographic assumption (e.g., "SHA-256 preimage resistance"), a hardware assumption (e.g., "TPM 2.0 calibration attestation is accurate"), and an adversary model assumption (e.g., "Adversary cannot parallelize VDF computation"), making explicit the conditions that must hold for the bounds to remain valid. The caveats field (key 4) contains an array of limitations or warnings about the security claim, with typical examples including "Cost estimates based on 2024Q4 cloud pricing", "Entropy estimate assumes timing samples are statistically independent", and "Does not protect against Attesting Environment compromise during evidence generation", enabling Relying Parties to understand the boundaries of the security guarantees.
          </t>
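          <t>
            Assembling the human-readable claim from the formal bounds can
            be sketched as follows; the phrasing mirrors the example above,
            and the function name is illustrative:
          </t>

          <sourcecode type="python"><![CDATA[
def render_claim(min_seconds, min_entropy_bits, cost_low_usd, cost_high_usd):
    # Phrase the claim as a minimum bound, never an absolute guarantee.
    hours = min_seconds / 3600
    return ("Forging this evidence requires at minimum %.1f hours of "
            "sequential VDF computation, %d bits of HMAC entropy "
            "prediction, and an estimated $%d-$%d in resources"
            % (hours, min_entropy_bits, cost_low_usd, cost_high_usd))
]]></sourcecode>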
        </section>

        <section anchor="fcb-formal-bounds">
          <name>Formal Security Bound</name>

          <t>
            The formal-security-bound structure, encoded in CBOR according to the CDDL schema, provides three orthogonal minimum requirements for forgery that an adversary must simultaneously overcome to produce fraudulent evidence within the RATS architecture. The min-seconds field (key 1) specifies the minimum wall-clock seconds to forge the evidence, derived from time-bound.min-recompute-seconds, which itself reflects the sequential VDF chain recomputation time using SHA-256 iterations that cannot be parallelized. The min-entropy field (key 2) specifies the minimum entropy an adversary must predict or generate, encoded in decibits (tenths of a bit) per the CDDL schema and derived from entropy-bound.total-entropy-bits, which reflects the accumulated behavioral entropy captured through HMAC-SHA256 Jitter Seal commitments. The min-cost field (key 3) specifies the minimum cost to forge the evidence, encoded in microdollars and conservatively derived from economic-bound.total-min-cost.usd-low to provide a lower bound that holds even if the adversary has discounted resource access.
          </t>

          <t>
            Relying Parties within the RATS trust framework can evaluate these CBOR encoded bounds against their risk tolerance through automated policy evaluation. For example, a policy might require: accept_evidence if min-seconds >= 3600 (requiring at least 1 hour of sequential VDF computation) AND min-entropy-bits >= 64 (requiring at least 64 bits of HMAC entropy prediction) AND min-cost-usd >= 100 (requiring at least $100 in forgery resources), with all three conditions enforced simultaneously to provide defense-in-depth against different adversary capabilities.
          </t>
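          <t>
            A sketch of this policy, applying the decibit and microdollar
            conversions from the CDDL schema comments (the integer-keyed
            dictionary stands in for the decoded CBOR map):
          </t>

          <sourcecode type="python"><![CDATA[
def accept_evidence(formal_bound):
    min_seconds = formal_bound[1]               # key 1: integer seconds
    min_entropy_bits = formal_bound[2] / 10.0   # key 2: decibits -> bits
    min_cost_usd = formal_bound[3] / 1000000    # key 3: microdollars -> USD
    return (min_seconds >= 3600
            and min_entropy_bits >= 64
            and min_cost_usd >= 100)
]]></sourcecode>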
        </section>
      </section>

      <section anchor="fcb-verification-procedure">
        <name>Verification Procedure</name>

        <t>
          A Verifier within the RATS architecture computes and validates forgery cost bounds through a systematic five-step procedure that ensures the claimed security guarantees are mathematically consistent with the cryptographic evidence embedded in the CBOR encoded evidence packet. First, the Verifier computes the time bound by summing VDF iterations across all checkpoints using SHA-256 hash chain verification, retrieving calibration-rate from the COSE signed calibration attestation backed by TPM 2.0 or similar hardware, and computing min-recompute-seconds = total-iterations / calibration-rate to establish the temporal forgery barrier. Second, the Verifier computes the entropy bound by aggregating Min-Entropy (H_min) estimates from all Jitter Seals with their HMAC-SHA256 commitments, verifying VDF entanglement for each seal to confirm replay prevention, and computing brute-force probability as 2^(-total-entropy-bits) to quantify the prediction difficulty. Third, the Verifier computes the economic bound by applying the cost model to the time bound, computing confidence intervals based on assumed adversary resource access, and normalizing by claimed duration derived from RFC 3339 timestamps to enable fair comparison across evidence packets.
        </t>

        <t>
          Fourth, the Verifier constructs the security statement by generating a human-readable claim that describes the minimum VDF recomputation time, HMAC entropy bits, and USD cost required for forgery, populating the formal-security-bound fields for automated policy evaluation, listing applicable cryptographic assumptions about SHA-256 and VDF security, and adding any relevant caveats about cost model staleness or entropy estimation limitations. Fifth, the Verifier validates claimed bounds by comparing the computed bounds against those claimed in the CBOR encoded evidence packet and flagging discrepancies exceeding tolerance, which may indicate either computational errors or potential manipulation of the forgery cost claims. 
        </t>

        <t>
          The Verifier MAY recompute bounds using its own cost model rather than accepting the Attester's claimed bounds encoded in the CDDL schema, and independent recomputation is RECOMMENDED for high-stakes verification within the RATS trust framework where the consequences of accepting forged evidence are significant.
        </t>
      </section>

      <section anchor="fcb-security">
        <name>Security Considerations</name>

        <section anchor="fcb-adversary-model">
          <name>Assumed Adversary Capabilities</name>

          <t>
            Forgery cost bounds within the RATS architecture assume an adversary with specific capabilities that bound the security guarantees provided by VDF chains and HMAC entropy commitments. The assumed adversary has access to commodity hardware at market prices for computing SHA-256 hash iterations, can execute VDF algorithms correctly following the published specifications, cannot parallelize inherently sequential VDFs due to the data dependency between iterations, cannot predict behavioral entropy in advance because human input timing exhibits genuine behavioral randomness, and has not compromised the Attesting Environment during evidence generation such that key material or intermediate state remains protected.
          </t>

          <t>
            The forgery cost bounds encoded in CBOR may not hold against adversaries who exceed these assumed capabilities within the RATS threat model. Adversaries with access to specialized SHA-256 ASICs at below-market cost may achieve lower compute costs than the economic bound assumes, reducing the effective forgery barrier. Adversaries who can compromise the Attesting Environment during evidence generation may extract key material or manipulate VDF computations, bypassing the sequential computation requirement entirely. Adversaries who discover novel cryptanalytic attacks on VDF constructions or hash function security may reduce the effective security below what the bounds indicate. Adversaries with access to quantum computers capable of breaking the cryptographic assumptions underlying SHA-256 preimage resistance may invalidate the security guarantees, though such computers do not currently exist at the scale required for this attack.
          </t>
        </section>

        <section anchor="fcb-bound-limitations">
          <name>Limitations of Cost Bounds</name>

          <t>
            Forgery cost bounds within the RATS architecture provide lower bounds rather than absolute guarantees, with several fundamental limitations that Relying Parties must understand when evaluating CBOR encoded evidence packets. The bounds assume current best-known attacks on VDF constructions and SHA-256 hash functions, meaning future cryptanalytic advances may reduce actual forgery costs below what the security-statement claims, requiring periodic reassessment of evidence security as the cryptographic landscape evolves. The economic estimates depend entirely on cost model assumptions encoded in the CDDL schema, and actual adversary costs may differ significantly based on their specific resource access, geographic location affecting electricity costs, or existing computational infrastructure that reduces marginal costs. The Min-Entropy (H_min) estimates from Jitter Seals assume statistically independent timing samples, but correlations in human input timing data (such as rhythmic typing patterns or predictable pause structures) may reduce effective entropy below the claimed HMAC commitment strength. The time bounds depend critically on calibration accuracy from TPM 2.0 or similar hardware attestation, and without cryptographic hardware attestation the calibration is self-reported by the Attesting Environment and may be manipulated to overstate VDF computation speed, inflating the apparent time bound.
          </t>
        </section>

        <section anchor="fcb-not-guaranteed">
          <name>What Bounds Do NOT Guarantee</name>

          <t>
            Forgery cost bounds within the RATS architecture explicitly do NOT provide certain guarantees that Relying Parties might incorrectly infer from the security claims encoded in CBOR. The bounds do not provide authenticity proof: evidence meeting VDF time thresholds and HMAC entropy thresholds is proven expensive to forge rather than proven authentic, and these are fundamentally distinct claims that must not be conflated in Relying Party policy decisions. The bounds do not provide content verification: the forgery cost analysis using SHA-256 chains says nothing about document content, quality, accuracy, or truthfulness, since only the process evidence describing how the document evolved is bounded rather than the document's substantive claims. The bounds do not provide intent attribution: the COSE signatures and VDF proofs do not prove who created the evidence or why they created it, since identity and intent attribution are outside the scope of cost-asymmetric forgery analysis and require separate attestation mechanisms. 
          </t>
        </section>

        <section anchor="fcb-policy-guidance">
          <name>Policy Guidance for Relying Parties</name>

          <t>
            Relying Parties within the RATS architecture should establish evidence acceptance policies based on four key considerations that translate forgery cost bounds encoded in CBOR into actionable trust decisions. First, risk assessment: What is the cost of accepting forged evidence with manipulated VDF proofs or fabricated HMAC entropy commitments? High-stakes decisions such as legal proceedings, academic credential verification, or financial attestation require proportionally higher cost thresholds in the formal-security-bound to ensure forgery is economically irrational. Second, adversary economics: Would forgery be economically rational given the costs quantified in the economic-bound structure? If VDF recomputation costs using SHA-256 iterations exceed the potential gain from successful forgery, rational adversaries operating within standard economic models will not attempt it, though irrational or ideologically motivated adversaries may still pose risks. Third, time sensitivity: How quickly must evidence be verified given the RFC 3339 timestamps in the evidence packet? Prompt verification can rely on fresh calibration attestations and current pricing, whereas verification deferred for years must account for the cost-model staleness and cryptanalytic advances flagged in the security-statement caveats. Fourth, corroborating evidence: Cost bounds derived from VDF chains and Jitter Seals are one factor among many in the trust decision, and external anchors such as RFC 3161 timestamps or blockchain anchors, TPM 2.0 hardware attestation, and contextual information about the Attesting Environment all contribute to overall confidence within the RATS trust framework.
          </t>
        </section>
      </section>
  </section>


    <!-- Section 7: Cross-Document Provenance Links -->
    <section anchor="provenance-links">
      <name>Cross-Document Provenance Links</name>

      <t>
        This section defines a mechanism for establishing cryptographic relationships between Evidence packets within the RATS <xref target="RFC9334"/> architecture, with provenance links encoded in CBOR <xref target="RFC8949"/> according to CDDL <xref target="RFC8610"/> schemas that enable cross-document attestation. Provenance links enable authors to cryptographically prove that one document evolved from, merged with, or was derived from other documented works by referencing the SHA-256 <xref target="RFC6234"/> chain hashes and UUID <xref target="RFC9562"/> identifiers of parent evidence packets, creating a verifiable derivation graph that maintains the tamper-evidence properties of the underlying VDF chains while extending attestation across document boundaries.
      </t>

      <section anchor="provenance-motivation">
        <name>Motivation</name>

        <t>
          Real-world authorship rarely occurs in isolation, and the RATS architecture must accommodate the complex evolution patterns that characterize genuine creative and scholarly work. Documents evolve through multiple stages where research notes with their own VDF chains and HMAC entropy commitments become draft papers with additional SHA-256 segment-based Merkle trees which in turn become published articles with final attestation, multiple contributors merge their independently-attested sections (each with distinct COSE signatures) into a collaborative work requiring unified provenance tracking, thesis chapters are extracted and expanded into standalone papers that should cryptographically reference their source material via UUID links, and codebases are forked with their evidence packets serving as the basis for derivative works that need verifiable connection to their origins.
        </t>

        <t>
          Without provenance links, each Evidence packet encoded in CBOR is cryptographically isolated despite representing interconnected creative work. An author cannot prove that their final manuscript evolved from the lab notes they documented six months earlier, even though both have valid VDF proofs and Jitter Seal entropy commitments using HMAC-SHA256, because the evidence packets lack cryptographic linkage. Provenance links provide this capability within the RATS framework while maintaining the privacy and security properties of the underlying evidence model, enabling Relying Parties to verify not only individual evidence packets but also the derivation relationships between them.
        </t>
      </section>

      <section anchor="provenance-section-structure">
        <name>Provenance Section Structure</name>

        <t>
          The provenance section is an optional component of the Evidence packet encoded in CBOR, identified by integer key 20 in the CDDL schema and signed via COSE as part of the overall evidence envelope. When present, it documents the cryptographic relationship between the current Evidence packet and one or more parent packets by referencing their UUID identifiers and SHA-256 chain hashes, enabling Relying Parties within the RATS architecture to verify derivation claims by fetching and validating the referenced parent evidence with its VDF proofs and HMAC entropy commitments.
        </t>

        <sourcecode type="cddl"><![CDATA[
    ; Provenance section for cross-document linking
    ; Key 20 in evidence-packet
    provenance-section = {
        ? 1 => [+ provenance-link],     ; parent-links
        ? 2 => [+ derivation-claim],    ; derivation-claims
        ? 3 => provenance-metadata,     ; metadata
    }

    ; Link to a parent Evidence packet
    provenance-link = {
        1 => uuid,                       ; parent-packet-id
        2 => hash-value,                 ; parent-chain-hash
        3 => derivation-type,            ; how this document relates
        4 => pop-timestamp,              ; when derivation occurred
        ? 5 => tstr,                     ; relationship-description
        ? 6 => [+ uint],                 ; inherited-checkpoints
        ? 7 => cose-signature,           ; cross-packet-attestation
    }

    ; Type of derivation relationship
    derivation-type = &(
        continuation: 1,                 ; same work, new packet
        merge: 2,                        ; from multiple sources
        split: 3,                        ; extracted from larger work
        rewrite: 4,                      ; substantial revision
        translation: 5,                  ; language translation
        fork: 6,                         ; independent branch
        citation-only: 7,                ; references only
    )

    ; Claims about what was derived and how
    derivation-claim = {
        1 => derivation-aspect,          ; what-derived
        2 => derivation-extent,          ; extent
        ? 3 => tstr,                     ; description
        ? 4 => uint .le 100,             ; estimated-percentage (0-100)
    }

    derivation-aspect = &(
        structure: 1,                    ; Document organization
        content: 2,                      ; Textual content
        ideas: 3,                        ; Conceptual elements
        data: 4,                         ; Data or results
        methodology: 5,                  ; Methods or approach
        code: 6,                         ; Source code
    )

    derivation-extent = &(
        none: 0,                         ; Not derived
        minimal: 1,                      ; Less than 10%
        partial: 2,                      ; 10-50%
        substantial: 3,                  ; 50-90%
        complete: 4,                     ; More than 90%
    )

    ; Optional metadata about provenance
    provenance-metadata = {
        ? 1 => tstr,                     ; provenance-statement
        ? 2 => bool,                     ; all-parents-available
        ? 3 => [+ tstr],                 ; missing-parent-reasons
    }
    ]]></sourcecode>
      </section>

      <section anchor="provenance-verification">
        <name>Verification of Provenance Links</name>

        <t>
          Verifiers MUST perform the following checks when provenance links
          are present:
        </t>

        <section anchor="provenance-chain-hash-verification">
          <name>Parent Chain Hash Verification</name>

          <t>
            For each provenance-link, if the parent Evidence packet is
            available:
          </t>

          <ol>
            <li>
              Verify that parent-packet-id matches the packet-id field of
              the parent Evidence packet.
            </li>
            <li>
              Verify that parent-chain-hash matches the tree-root of
              the final checkpoint in the parent Evidence packet.
            </li>
            <li>
              Verify that the derivation timestamp is not earlier than the
              created timestamp of the parent packet.
            </li>
          </ol>
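          <t>
            These checks can be sketched as follows, treating decoded
            packets as dictionaries with descriptive keys (the key names
            and packet layout are illustrative, not the wire encoding):
          </t>

          <sourcecode type="python"><![CDATA[
def verify_parent_link(link, parent_packet):
    # Check 1: the link must reference the parent's packet-id.
    if link["parent-packet-id"] != parent_packet["packet-id"]:
        return False, "parent-packet-id mismatch"
    # Check 2: the chain hash must match the parent's final tree-root.
    final_checkpoint = parent_packet["checkpoints"][-1]
    if link["parent-chain-hash"] != final_checkpoint["tree-root"]:
        return False, "parent-chain-hash mismatch"
    # Check 3: derivation cannot predate the parent's creation.
    if link["derivation-timestamp"] < parent_packet["created"]:
        return False, "derivation predates parent creation"
    return True, "ok"
]]></sourcecode>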

          <t>
            If the parent Evidence packet is not available, the Verifier
            SHOULD note this limitation in the Attestation Result caveats.
            The provenance link remains valid but unverified.
          </t>
        </section>

        <section anchor="provenance-cross-attestation">
          <name>Cross-Packet Attestation</name>

          <t>
            When cross-packet-attestation is present, it provides
            cryptographic proof that the author of the current packet had
            access to the parent packet at the time of derivation:
          </t>

          <artwork><![CDATA[
    cross-packet-attestation = COSE_Sign1(
        payload = CBOR_encode({
            1: current-packet-id,
            2: parent-packet-id,
            3: parent-chain-hash,
            4: derivation-timestamp,
        }),
        key = author-signing-key
    )
    ]]></artwork>
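          <t>
            Producing a full COSE_Sign1 structure requires a COSE
            implementation; the non-normative sketch below shows only the
            deterministic CBOR encoding of the four-field payload using a
            minimal hand-rolled encoder, with HMAC-SHA256 standing in for
            the author's signature so the example has no external
            dependencies. The integer keys, byte-string identifiers, and
            stand-in signing function are illustrative assumptions.
          </t>

          <sourcecode type="python"><![CDATA[
import hashlib
import hmac
import struct

def _cbor_head(major, n):
    # Encode a CBOR item header for unsigned argument n.
    if n < 24:
        return bytes([(major << 5) | n])
    if n < 0x100:
        return bytes([(major << 5) | 24, n])
    if n < 0x10000:
        return bytes([(major << 5) | 25]) + struct.pack(">H", n)
    if n < 0x100000000:
        return bytes([(major << 5) | 26]) + struct.pack(">I", n)
    return bytes([(major << 5) | 27]) + struct.pack(">Q", n)

def _cbor(value):
    if isinstance(value, int):                  # unsigned integers only
        return _cbor_head(0, value)
    if isinstance(value, bytes):                # byte strings
        return _cbor_head(2, len(value)) + value
    raise TypeError("unsupported type in this sketch")

def attestation_payload(current_id, parent_id, parent_chain_hash, ts):
    body = {1: current_id, 2: parent_id, 3: parent_chain_hash, 4: ts}
    out = _cbor_head(5, len(body))              # map header, major type 5
    for key in sorted(body):                    # deterministic key order
        out += _cbor(key) + _cbor(body[key])
    return out

def sign_attestation(payload, key):
    # Stand-in for COSE_Sign1 with the author signing key.
    return hmac.new(key, payload, hashlib.sha256).digest()
]]></sourcecode>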

          <t>
            This attestation prevents retroactive provenance claims where
            an author discovers an existing Evidence packet and falsely
            claims derivation after the fact.
          </t>
        </section>
      </section>

      <section anchor="provenance-privacy">
        <name>Privacy Considerations for Provenance</name>

        <t>
          Provenance links may reveal information about the author's
          creative process and document history. Authors SHOULD consider:
        </t>

        <ul>
          <li>
            Parent packet IDs are disclosed to anyone with access to the
            child packet.
          </li>
          <li>
            If parent packets use the author-salted hash mode, the salt
            MUST be shared for full verification.
          </li>
          <li>
            Derivation claims may reveal collaboration patterns or
            research relationships.
          </li>
        </ul>

        <t>
          Authors MAY choose to omit provenance links for privacy while
          still maintaining independent Evidence for each document.
        </t>
      </section>

      <section anchor="provenance-examples">
        <name>Provenance Link Examples</name>

        <section anchor="provenance-example-continuation">
          <name>Continuation Example</name>

          <t>
            A dissertation written over 18 months with monthly Evidence
            exports links each monthly packet to its predecessor using
            derivation-type continuation (1). A related merge scenario,
            in which three contributors combine independently attested
            drafts, records derivation-claims (key 2) describing what
            was derived from each parent:
          </t>

          <artwork type="cbor-diag"><![CDATA[
    {
      2: [
        {1: 1, 2: 3, 3: "Structure from Alice's draft"},
        {1: 2, 2: 2, 3: "Content merged from all three"},
        {1: 4, 2: 4, 3: "Data primarily from Bob"}
      ]
    }
    ]]></artwork>
        </section>
      </section>
  </section>


    <!-- Section 8: Incremental Evidence with Continuation Tokens -->
    <section anchor="continuation-tokens">
      <name>Incremental Evidence with Continuation Tokens</name>

      <t>
        This section defines a mechanism for producing Evidence packets
        incrementally over extended authoring periods. Continuation tokens
        allow a single logical authorship effort to be documented across
        multiple Evidence packets without losing cryptographic continuity.
      </t>

      <section anchor="continuation-motivation">
        <name>Motivation for Continuation Tokens</name>

        <t>
          Long-form works such as novels, dissertations, or technical books
          may span months or years of active authorship. Capturing all
          Evidence in a single packet presents practical challenges:
        </t>

        <ul>
          <li>
            Unbounded segment-based Merkle trees consume storage and increase
            verification time.
          </li>
          <li>
            Authors may need to share partial Evidence before work
            completion (e.g., chapter submissions, progress reports).
          </li>
          <li>
            System failures or device changes could result in loss of
            accumulated Evidence.
          </li>
          <li>
            Privacy requirements may dictate periodic Evidence export
            and local data deletion.
          </li>
        </ul>

        <t>
          Continuation tokens address these challenges by enabling
          cryptographically linked Evidence packet chains while preserving
          independent verifiability of each packet.
        </t>
      </section>

      <section anchor="continuation-structure">
        <name>Continuation Token Structure</name>

        <t>
          The continuation token is an optional component of the Evidence
          packet, identified by integer key 21. It establishes the packet's
          position within a multi-packet Evidence series.
        </t>

        <sourcecode type="cddl"><![CDATA[
    ; Continuation token for multi-packet Evidence series
    ; Key 21 in evidence-packet
    continuation-section = {
        1 => uuid,                       ; series-id
        2 => uint,                       ; packet-sequence
        ? 3 => hash-value,               ; prev-packet-chain-hash
        ? 4 => uuid,                     ; prev-packet-id
        5 => continuation-summary,       ; cumulative-summary
        ? 6 => cose-signature,           ; series-binding-signature
    }

    ; Cumulative statistics across the series
    continuation-summary = {
        1 => uint,                       ; total-checkpoints-so-far
        2 => uint,                       ; total-chars-so-far
        3 => duration,                   ; total-vdf-time-so-far
        4 => entropy-decibits,           ; total-entropy-so-far (decibits)
        5 => uint,                       ; packets-in-series
        ? 6 => pop-timestamp,            ; series-started-at
        ? 7 => duration,                 ; total-elapsed-time
    }
    ]]></sourcecode>

        <t>
          Key semantics:
        </t>

        <dl>
          <dt>series-id:</dt>
          <dd>
            A UUID that remains constant across all packets in the series.
            Generated when the first packet in the series is created.
          </dd>

          <dt>packet-sequence:</dt>
          <dd>
            Zero-indexed sequence number. The first packet in a series has
            packet-sequence = 0.
          </dd>

          <dt>prev-packet-chain-hash:</dt>
          <dd>
            The tree-root of the final checkpoint in the previous
            packet. MUST be present for packet-sequence &gt; 0. MUST NOT be
            present for packet-sequence = 0.
          </dd>

          <dt>prev-packet-id:</dt>
          <dd>
            The packet-id of the previous packet in the series. SHOULD be
            present for packet-sequence &gt; 0 to enable packet retrieval.
          </dd>

          <dt>cumulative-summary:</dt>
          <dd>
            Running totals across all packets in the series, enabling
            Verifiers to assess the full authorship effort without
            accessing all prior packets.
          </dd>
        </dl>
      </section>

      <section anchor="continuation-chain-integrity">
        <name>Chain Integrity Across Packets</name>

        <t>
          When a new packet continues from a previous packet, the VDF
          chain MUST maintain cryptographic continuity:
        </t>

        <artwork><![CDATA[
    Packet N (final checkpoint):
      tree-root[last] = H(checkpoint-data)
      VDF_output{last} = computed VDF result

    Packet N+1 (first checkpoint):
      prev-packet-chain-hash = tree-root[last] from Packet N
      VDF_input{0} = H(
          VDF_output{last} from Packet N ||
          content-hash{0} ||
          jitter-commitment{0} ||
          series-id ||
          packet-sequence
      )
    ]]></artwork>

        <t>
          This construction ensures:
        </t>

        <ol>
          <li>
            The new packet cannot be created without knowledge of the
            previous packet's final VDF output.
          </li>
          <li>
            Backdating the new packet requires recomputing all VDF proofs
            in both the current and all subsequent packets.
          </li>
          <li>
            The series-id and packet-sequence are bound into the VDF chain,
            preventing packets from being reordered or reassigned to
            different series.
          </li>
        </ol>
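
        <t>
          The derivation of the first VDF input of packet N+1 can be
          sketched as follows (illustrative Python, not normative;
          SHA-256 is assumed for H, and packet-sequence is assumed to be
          encoded as a fixed-width big-endian integer):
        </t>

        <sourcecode type="python"><![CDATA[
    import hashlib

    def first_vdf_input(prev_vdf_output: bytes, content_hash: bytes,
                        jitter_commitment: bytes, series_id: bytes,
                        packet_sequence: int) -> bytes:
        """H(VDF_output{last} || content-hash{0} || jitter-commitment{0}
        || series-id || packet-sequence) for packet N+1's first
        checkpoint, per the construction above."""
        h = hashlib.sha256()
        for part in (prev_vdf_output, content_hash, jitter_commitment,
                     series_id, packet_sequence.to_bytes(8, "big")):
            h.update(part)
        return h.digest()
    ]]></sourcecode>

        <t>
          Because the previous packet's final VDF output is an input to
          the hash, the derivation is deterministic yet cannot be
          computed before that output exists.
        </t>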
      </section>

      <section anchor="continuation-verification">
        <name>Verification of Continuation Chains</name>

        <section anchor="continuation-single-packet">
          <name>Single Packet Verification</name>

          <t>
            Each packet in a continuation series MUST be independently
            verifiable. A Verifier with access only to packet N can:
          </t>

          <ul>
            <li>
              Verify segment chain integrity within the packet.
            </li>
            <li>
              Verify all VDF proofs within the packet.
            </li>
            <li>
              Verify jitter bindings within the packet.
            </li>
            <li>
              Report the cumulative-summary as claimed (not proven without
              prior packets).
            </li>
          </ul>

          <t>
            The Attestation Result SHOULD note that the packet is part of
            a series and whether prior packets were verified.
          </t>
        </section>

        <section anchor="continuation-full-series">
          <name>Full Series Verification</name>

          <t>
            When all packets in a series are available, a Verifier MUST:
          </t>

          <ol>
            <li>
              Verify each packet independently.
            </li>
            <li>
              Verify that series-id is consistent across all packets.
            </li>
            <li>
              Verify that packet-sequence values are consecutive starting
              from 0.
            </li>
            <li>
              For each packet N &gt; 0, verify that prev-packet-chain-hash
              matches the final tree-root of packet N-1.
            </li>
            <li>
              For each packet N &gt; 0, verify that the first checkpoint's
              VDF_input incorporates the previous packet's final VDF_output.
            </li>
            <li>
              Verify that cumulative-summary values are consistent with
              the sum of individual packet statistics.
            </li>
          </ol>
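
          <t>
            Steps 2 through 4 of this procedure can be sketched as
            follows (illustrative Python over simplified packet
            dictionaries; the field names are placeholders, and the
            per-packet cryptographic checks of steps 1, 5, and 6 are out
            of scope here):
          </t>

          <sourcecode type="python"><![CDATA[
    def verify_series(packets):
        """Structural checks for a continuation series: consistent
        series-id, consecutive sequence numbers from 0, and matching
        chain-hash links between adjacent packets."""
        packets = sorted(packets, key=lambda p: p["sequence"])
        series_id = packets[0]["series_id"]
        for i, pkt in enumerate(packets):
            if pkt["series_id"] != series_id:
                return False          # step 2: consistent series-id
            if pkt["sequence"] != i:
                return False          # step 3: consecutive from 0
            if i == 0:
                if pkt["prev_chain_hash"] is not None:
                    return False      # first packet carries no link
            elif pkt["prev_chain_hash"] != packets[i - 1]["final_tree_root"]:
                return False          # step 4: chain-hash link
        return True
    ]]></sourcecode>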
        </section>
      </section>

      <section anchor="continuation-series-binding">
        <name>Series Binding Signature</name>

        <t>
          The optional series-binding-signature provides cryptographic
          proof that all packets in a series were produced by the same
          author:
        </t>

        <artwork><![CDATA[
    series-binding-signature = COSE_Sign1(
        payload = CBOR_encode({
            1: series-id,
            2: packet-sequence,
            3: packet-id,
            4: prev-packet-chain-hash,  / if present /
            5: cumulative-summary,
        }),
        key = author-signing-key
    )
    ]]></artwork>

        <t>
          When present, Verifiers can confirm that the signing key is
          consistent across all packets in the series, providing additional
          assurance of authorship continuity.
        </t>
      </section>

      <section anchor="continuation-practical">
        <name>Practical Considerations</name>

        <section anchor="continuation-export-triggers">
          <name>When to Export a Continuation Packet</name>

          <t>
            Implementations SHOULD support configurable triggers for
            continuation packet export:
          </t>

          <ul>
            <li>
              <strong>Checkpoint count threshold:</strong> Export after N
              checkpoints (e.g., 1000).
            </li>
            <li>
              <strong>Time interval:</strong> Export weekly or monthly.
            </li>
            <li>
              <strong>Document size threshold:</strong> Export when document
              exceeds N characters.
            </li>
            <li>
              <strong>Manual trigger:</strong> User-initiated export.
            </li>
            <li>
              <strong>Milestone events:</strong> Export at chapter
              completion or version milestones.
            </li>
          </ul>
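
          <t>
            Trigger evaluation can be sketched as follows (illustrative
            Python; the field names are placeholders, not normative):
          </t>

          <sourcecode type="python"><![CDATA[
    def should_export(state, cfg):
        """Return True when any configured export trigger fires."""
        return (state["checkpoints"] >= cfg.get("max_checkpoints", 1000)
                or state["chars"] >= cfg.get("max_chars", float("inf"))
                or state["seconds_since_export"]
                    >= cfg.get("max_interval", float("inf"))
                or state.get("manual", False)      # user-initiated
                or state.get("milestone", False))  # chapter/version event
    ]]></sourcecode>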
        </section>

        <section anchor="continuation-gap-handling">
          <name>Handling Gaps in Series</name>

          <t>
            If a packet in a series is lost or unavailable:
          </t>

          <ul>
            <li>
              Subsequent packets remain independently verifiable.
            </li>
            <li>
              The cumulative-summary provides claimed totals but cannot
              be proven without all packets.
            </li>
            <li>
              Verifiers MUST note the gap in Attestation Results.
            </li>
            <li>
              Chain continuity verification fails at the gap but resumes
              for subsequent contiguous packets.
            </li>
          </ul>
        </section>
      </section>

      <section anchor="continuation-example">
        <name>Continuation Token Example</name>

        <t>
          Third monthly export of a dissertation in progress:
        </t>

        <artwork type="cbor-diag"><![CDATA[
    continuation-section = {
      1: h'dissertation-series-uuid...',  / series-id /
      2: 2,                           / packet-sequence (3rd) /
      3: {                                 / prev-packet-chain-hash /
        1: 1,
        2: h'feb-packet-final-hash...'
      },
      4: h'feb-packet-uuid...',            / prev-packet-id /
      5: {                                 / cumulative-summary /
        1: 847,                            / total-checkpoints-so-far /
        2: 45230,                          / total-chars-so-far /
        3: 12600.0,                        / total-vdf-time: ~3.5 hours /
        4: 1567,                           / total-entropy: 156.7 bits (decibits) /
        5: 3,                              / packets-in-series /
        6: 1(1704067200),              / series-started-at /
        7: 7776000.0                       / total-elapsed: 90 days /
      },
      6: h'D28441A0...'                     / series-binding-signature /
    }
    ]]></artwork>
      </section>
  </section>




    <!-- Section 11: Quantified Trust Policies -->
    <section anchor="trust-policies">
      <name>Quantified Trust Policies</name>

      <t>
        This section defines a framework for expressing and computing
        trust scores in Attestation Results. Trust policies enable
        Relying Parties to customize how Evidence is evaluated and
        to understand the basis for confidence scores.
      </t>

      <section anchor="trust-motivation">
        <name>Trust Policy Motivation</name>

        <t>
          The base attestation-result structure provides a confidence-score
          (0.0-1.0) and a verdict enumeration, but does not explain how
          these values were computed. Different Relying Parties have
          different trust requirements:
        </t>

        <ul>
          <li>
            An academic journal may weight presence challenges heavily.
          </li>
          <li>
            A legal proceeding may require hardware-backed attestation
            before accepting any Evidence.
          </li>
          <li>
            A publishing workflow may emphasize accumulated VDF duration
            over other factors.
          </li>
        </ul>

        <t>
          The trust policy framework accommodates these differences by
          making confidence computation transparent and configurable.
        </t>
      </section>

      <section anchor="trust-structure">
        <name>Trust Policy Structure</name>

        <t>
          The appraisal-policy extension is added to verifier-metadata,
          identified by integer key 5.
        </t>

        <sourcecode type="cddl"><![CDATA[
    ; Extended verifier-metadata with trust policy
    verifier-metadata = {
        ? 1 => tstr,                     ; verifier-version
        ? 2 => tstr,                     ; verifier-uri
        ? 3 => [+ bstr],                 ; verifier-cert-chain
        ? 4 => tstr,                     ; policy-id
        ? 5 => appraisal-policy,         ; policy details
    }

    ; Complete appraisal policy specification
    appraisal-policy = {
        1 => tstr,                       ; policy-uri
        2 => tstr,                       ; policy-version
        3 => trust-computation,          ; computation-model
        4 => [+ trust-factor],           ; factors
        ? 5 => [+ trust-threshold],      ; thresholds
        ? 6 => policy-metadata,          ; metadata
    }

    ; How the final score is computed
    trust-computation = &(
        weighted-average: 1,             ; Sum of (factor * weight)
        minimum-of-factors: 2,           ; Min across all factors
        geometric-mean: 3,               ; Nth root of product
        custom-formula: 4,               ; Described in policy-uri
    )

    ; Individual factor in trust computation
    trust-factor = {
        1 => tstr,                       ; factor-name
        2 => factor-type,                ; type
        3 => ratio-millibits,            ; weight (0-1000 = 0.0-1.0)
        4 => int,                        ; observed-value (units vary by type)
        5 => ratio-millibits,            ; normalized-score (0-1000 = 0.0-1.0)
        6 => ratio-millibits,            ; contribution (weight * score)
        ? 7 => factor-evidence,          ; supporting-evidence
    }

    factor-type = &(
        ; Chain-verifiable factors
        vdf-duration: 1,
        checkpoint-count: 2,
        jitter-entropy: 3,
        chain-integrity: 4,
        revision-depth: 5,

        ; Presence factors
        presence-rate: 10,
        presence-response-time: 11,

        ; Hardware factors
        hardware-attestation: 20,
        calibration-attestation: 21,

        ; Behavioral factors
        edit-entropy: 30,
        monotonic-ratio: 31,
        typing-rate-consistency: 32,

        ; External factors
        anchor-confirmation: 40,
        anchor-count: 41,

        ; Collaboration factors
        collaborator-attestations: 50,
        contribution-consistency: 51,
    )

    ; Evidence supporting a factor score
    factor-evidence = {
        ? 1 => int,                      ; raw-value (units vary by factor)
        ? 2 => int,                      ; threshold-value (same units as raw)
        ? 3 => tstr,                     ; computation-notes
        ? 4 => [uint, uint],             ; checkpoint-range
    }

    ; Threshold requirements for pass/fail determination
    trust-threshold = {
        1 => tstr,                       ; threshold-name
        2 => threshold-type,             ; type
        3 => ratio-millibits,            ; required-value (0-1000 for scores)
        4 => bool,                       ; met
        ? 5 => tstr,                     ; failure-reason
    }

    threshold-type = &(
        minimum-score: 1,                ; Score must be >= value
        minimum-factor: 2,               ; factor >= value
        required-factor: 3,              ; factor present
        maximum-caveats: 4,              ; caveats <= value
    )

    policy-metadata = {
        ? 1 => tstr,                     ; policy-name
        ? 2 => tstr,                     ; policy-description
        ? 3 => tstr,                     ; policy-authority
        ? 4 => pop-timestamp,            ; policy-effective-date
        ? 5 => [+ tstr],                 ; applicable-domains
    }
    ]]></sourcecode>
      </section>

      <section anchor="trust-computation-models">
        <name>Trust Computation Models</name>

        <section anchor="trust-weighted-average">
          <name>Weighted Average Model</name>

          <t>
            In the weighted average model, each trust factor contributes
            to the confidence score in proportion to its assigned weight:
          </t>

          <artwork><![CDATA[
    confidence-score = sum(factor[i].weight * factor[i].normalized-score)
                       / sum(factor[i].weight)

    Constraints:
      - sum(weights) SHOULD equal 1.0 for clarity
      - All normalized-scores are in [0.0, 1.0]
      - Resulting confidence-score is in [0.0, 1.0]

    Example:
      vdf-duration:      weight=0.30, score=0.95, contribution=0.285
      jitter-entropy:    weight=0.25, score=0.80, contribution=0.200
      presence-rate:     weight=0.20, score=1.00, contribution=0.200
      chain-integrity:   weight=0.15, score=1.00, contribution=0.150
      hardware-attest:   weight=0.10, score=0.00, contribution=0.000

      confidence-score = 0.285 + 0.200 + 0.200 + 0.150 + 0.000 = 0.835
    ]]></artwork>
        </section>

        <section anchor="trust-minimum-model">
          <name>Minimum-of-Factors Model</name>

          <t>
            The minimum-of-factors model is a conservative approach in
            which the overall confidence score is bounded by the weakest
            factor: confidence-score = min(factor[i].normalized-score
            for all i). A deficiency in any single trust dimension (VDF
            duration, Jitter Seal entropy, presence verification, chain
            integrity, or TPM 2.0 <xref target="TPM2.0"/> hardware
            attestation) dominates the final assessment. For example,
            given vdf-duration = 0.95, jitter-entropy = 0.80,
            presence-rate = 1.00, chain-integrity = 1.00, and
            hardware-attest = 0.00, the confidence-score is 0.00: the
            absence of hardware attestation bounds the overall trust
            regardless of the strength of the other factors.
          </t>

          <t>
            This model is appropriate for high-security deployments in
            which any weakness in the evidence chain should disqualify
            the Evidence packet entirely, such as forensic
            investigations, legal proceedings, or high-stakes academic
            integrity verification, where the cost of accepting forged
            evidence exceeds the cost of false rejection.
          </t>
        </section>

        <section anchor="trust-geometric-mean">
          <name>Geometric Mean Model</name>

          <t>
            The geometric mean model penalizes outliers more heavily
            than the weighted average but less severely than the
            minimum-of-factors model: confidence-score =
            (product(factor[i].normalized-score))^(1/n), where n is the
            number of trust factors. For example, with five factors
            scored [0.95, 0.80, 1.00, 1.00, 0.60] (vdf-duration,
            jitter-entropy, presence-rate, chain-integrity,
            hardware-attestation), the product is
            0.95 * 0.80 * 1.00 * 1.00 * 0.60 = 0.456, yielding
            confidence-score = 0.456^(1/5), approximately 0.855. Overall
            confidence thus remains reasonable despite one weak factor,
            while the deficiency is penalized more than a simple
            weighted average would penalize it.
          </t>
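
          <t>
            The three computation models can be sketched as follows
            (illustrative Python over (weight, normalized-score) pairs;
            weights are ignored by the minimum-of-factors and
            geometric-mean models, as in the definitions above):
          </t>

          <sourcecode type="python"><![CDATA[
    import math

    def weighted_average(factors):
        # sum(weight * score) / sum(weight)
        total_w = sum(w for w, _ in factors)
        return sum(w * s for w, s in factors) / total_w

    def minimum_of_factors(factors):
        # Bounded by the weakest factor
        return min(s for _, s in factors)

    def geometric_mean(factors):
        # Nth root of the product of scores
        scores = [s for _, s in factors]
        return math.prod(scores) ** (1.0 / len(scores))
    ]]></sourcecode>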
        </section>
      </section>

      <section anchor="trust-normalization">
        <name>Factor Normalization</name>

        <t>
          Raw factor values must be normalized to the [0.0, 1.0] range
          for consistent computation. The normalization function depends
          on the factor type:
        </t>

        <section anchor="trust-normalize-threshold">
          <name>Threshold Normalization</name>

          <artwork><![CDATA[
    For factors with a minimum threshold:
      if raw_value >= threshold:
          normalized = 1.0
      else:
          normalized = raw_value / threshold

    Example: vdf-duration with 3600s threshold
      raw_value = 2700s
      normalized = 2700 / 3600 = 0.75
    ]]></artwork>
        </section>

        <section anchor="trust-normalize-range">
          <name>Range Normalization</name>

          <artwork><![CDATA[
    For factors with min/max range:
      normalized = (raw_value - min) / (max - min)
      normalized = clamp(normalized, 0.0, 1.0)

    Example: typing-rate with acceptable range 20-200 WPM
      raw_value = 75 WPM
      normalized = (75 - 20) / (200 - 20) = 0.306
    ]]></artwork>
        </section>

        <section anchor="trust-normalize-binary">
          <name>Binary Normalization</name>

          <artwork><![CDATA[
    For pass/fail factors:
      normalized = 1.0 if present/valid else 0.0

    Example: hardware-attestation
      TPM attestation present and valid: normalized = 1.0
      No hardware attestation: normalized = 0.0
    ]]></artwork>
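
          <t>
            Taken together, the three normalization functions above can
            be sketched as (illustrative Python; not normative):
          </t>

          <sourcecode type="python"><![CDATA[
    def normalize_threshold(raw, threshold):
        # 1.0 once the threshold is met, else the fraction achieved
        return 1.0 if raw >= threshold else raw / threshold

    def normalize_range(raw, lo, hi):
        # Linear map of [lo, hi] onto [0.0, 1.0], clamped at both ends
        return min(1.0, max(0.0, (raw - lo) / (hi - lo)))

    def normalize_binary(valid):
        # Pass/fail factors
        return 1.0 if valid else 0.0
    ]]></sourcecode>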
        </section>
      </section>

      <section anchor="trust-predefined-policies">
        <name>Predefined Policy Profiles</name>

        <t>
          This specification defines several policy profiles for common
          use cases. Implementations MAY support these predefined
          profiles by URI reference:
        </t>

        <table anchor="tbl-policy-profiles">
          <name>Predefined Policy Profiles</name>
          <thead>
            <tr>
              <th>Profile URI</th>
              <th>Description</th>
              <th>Key Characteristics</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>urn:ietf:params:pop:policy:basic</td>
              <td>Basic verification</td>
              <td>Chain integrity only</td>
            </tr>
            <tr>
              <td>urn:ietf:params:pop:policy:academic</td>
              <td>Academic submission</td>
              <td>Weighted average, presence required</td>
            </tr>
            <tr>
              <td>urn:ietf:params:pop:policy:legal</td>
              <td>Legal proceedings</td>
              <td>Minimum model, hardware required</td>
            </tr>
            <tr>
              <td>urn:ietf:params:pop:policy:publishing</td>
              <td>Publishing workflow</td>
              <td>Weighted average, VDF emphasized</td>
            </tr>
          </tbody>
        </table>
      </section>

      <section anchor="trust-example">
        <name>Trust Policy Example</name>

        <t>
          The following example applies the academic policy profile to a
          Standard tier Evidence packet, shown in CBOR diagnostic
          notation:
        </t>

        <artwork type="cbor-diag"><![CDATA[
    verifier-metadata = {
      1: "witnessd-verifier-2.0",
      2: "https://verify.example.com",
      4: "academic-v1",
      5: {  / appraisal-policy /
        1: "urn:ietf:params:pop:policy:academic",
        2: "1.0.0",
        3: 1,  / computation: weighted-average /
        4: [   / factors (using millibits: 1000 = 1.0) /
          {
            1: "vdf-duration",
            2: 1,
            3: 250,                      / weight: 250/1000 = 0.25 /
            4: 5400,                     / observed: 90 minutes (seconds) /
            5: 1000,                     / normalized: 1000/1000 = 1.0 /
            6: 250,                      / contribution: 250 * 1000 / 1000 /
            7: {1: 5400, 2: 3600}        / raw, threshold (seconds) /
          },
          {
            1: "jitter-entropy",
            2: 3,
            3: 200,                      / weight: 0.20 /
            4: 457,                      / observed: 45.7 bits (decibits) /
            5: 1000,                     / normalized: 1.0 /
            6: 200                       / contribution: 0.20 /
          },
          {
            1: "presence-rate",
            2: 10,
            3: 250,                      / weight: 0.25 /
            4: 917,                      / observed: 11/12 = 0.917 (millibits) /
            5: 917,                      / normalized: direct ratio /
            6: 229                       / contribution: 250 * 917 / 1000 /
          },
          {
            1: "chain-integrity",
            2: 4,
            3: 200,                      / weight: 0.20 /
            4: 1000,                     / binary valid = 1.0 (millibits) /
            5: 1000,                     / normalized: 1.0 /
            6: 200                       / contribution: 0.20 /
          },
          {
            1: "edit-entropy",
            2: 30,
            3: 100,                      / weight: 0.10 /
            4: 35,                       / observed: 3.5 bits (decibits) /
            5: 863,                      / normalized: 0.863 (millibits) /
            6: 86                        / contribution: 100 * 863 / 1000 /
          }
        ],
        5: [   / thresholds (millibits) /
          {
            1: "minimum-overall",
            2: 1,
            3: 700,                      / required: 700/1000 = 0.70 /
            4: true
          },
          {
            1: "presence-required",
            2: 3,
            3: 0,                        / any presence suffices /
            4: true
          }
        ],
        6: {   / metadata /
          1: "Academic Submission Policy",
          3: "WritersLogic Academic Integrity",
          5: ["academic", "education", "research"]
        }
      }
    }

    / confidence: (250 + 200 + 229 + 200 + 86) / 1000 = 965/1000 = 0.965 /
    ]]></artwork>
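
        <t>
          The millibit arithmetic in this example can be checked as
          follows (illustrative Python; contributions are truncated
          integer products, matching the values above):
        </t>

        <sourcecode type="python"><![CDATA[
    def contribution(weight_mb, score_mb):
        # weight * normalized-score in millibits, truncated toward zero
        return weight_mb * score_mb // 1000

    # (weight, normalized-score) pairs from the factors above
    factors = [(250, 1000), (200, 1000), (250, 917),
               (200, 1000), (100, 863)]
    confidence_mb = sum(contribution(w, s) for w, s in factors)
    # confidence_mb = 965, i.e., a confidence score of 0.965
    ]]></sourcecode>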
      </section>
  </section>


    <!-- Section 12: Compact Evidence References -->
    <section anchor="compact-evidence">
      <name>Compact Evidence References</name>

      <t>
        This section defines a compact representation of Evidence that
        can be embedded in document metadata or other space-constrained
        data structures where a full Evidence packet would exceed
        available capacity. A Compact Evidence Reference is
        cryptographically linked, through hashes and a COSE signature,
        to the full Evidence packet, so that the packet and its complete
        segment chain need not be transmitted or stored in the
        constrained embedding context.
      </t>

      <section anchor="compact-motivation">
        <name>Compact Reference Motivation</name>

        <t>
          Full Evidence packets can be large, ranging from kilobytes to
          megabytes depending on segment count and VDF proof size,
          making them unsuitable for direct inclusion in
          space-constrained document headers or metadata fields.
        </t>

        <t>
          A Compact Evidence Reference provides "proof at a glance"
          while linking to the full Evidence packet for complete
          verification. The reference is cryptographically bound to the
          Evidence through the chain and document hashes, and its COSE
          signature prevents undetected tampering with the summary
          claims.
        </t>
      </section>

      <section anchor="compact-structure">
        <name>Compact Reference Structure</name>

        <t>
          The Compact Evidence Reference uses a dedicated CBOR semantic
          tag (1347440673 = 0x50505021 = "PPP!") to distinguish it from
          full Evidence packets, enabling parsers to identify the
          compact format immediately and to locate the referenced full
          packet, via the evidence-uri field, for complete verification.
        </t>

        <sourcecode type="cddl"><![CDATA[
    ; Compact Evidence Reference
    ; Tag 1347440673 = 0x50505021 = "PPP!"
    tagged-compact-ref = #6.1347440673(compact-evidence-ref)

    compact-evidence-ref = {
        1 => uuid,                       ; packet-id
        2 => hash-value,                 ; chain-hash
        3 => hash-value,                 ; document-hash
        4 => compact-summary,            ; summary
        5 => tstr,                       ; evidence-uri
        6 => cose-signature,             ; compact-signature
        ? 7 => compact-metadata,         ; metadata
    }

    compact-summary = {
        1 => uint,                       ; checkpoint-count
        2 => uint,                       ; total-chars
        3 => duration,                   ; total-vdf-time
        4 => uint,                       ; evidence-tier (1-4)
        ? 5 => forensic-assessment,      ; verdict (if available)
        ? 6 => confidence-millibits,     ; confidence (0-1000 = 0.0-1.0)
    }

    compact-metadata = {
        ? 1 => tstr,                     ; author-name
        ? 2 => pop-timestamp,            ; created
        ? 3 => tstr,                     ; verifier-name
        ? 4 => pop-timestamp,            ; verified-at
    }
    ]]></sourcecode>
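
        <t>
          The correspondence between the tag number and its ASCII
          mnemonic can be checked programmatically (illustrative Python;
          note that the decimal form of 0x50505021 is 1347440673):
        </t>

        <sourcecode type="python"><![CDATA[
    # CBOR semantic tag for Compact Evidence References
    COMPACT_REF_TAG = 1347440673

    def is_compact_ref_tag(tag):
        # 0x50505021 spells "PPP!" when read as big-endian ASCII bytes
        return tag == COMPACT_REF_TAG
    ]]></sourcecode>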
      </section>

      <section anchor="compact-signature">
        <name>Compact Reference Signature</name>

        <t>
          The compact-signature binds all reference fields to prevent
          undetected tampering with the summary claims. It is computed
          as COSE_Sign1(payload = CBOR_encode({1: packet-id
          (UUID <xref target="RFC9562"/>), 2: chain-hash,
          3: document-hash, 4: compact-summary, 5: evidence-uri}),
          key = signing-key), cryptographically binding the reference to
          both the document content and the full Evidence packet.
        </t>

        <t>
          The signing key may be the author's signing key
          (self-attestation: the author vouches for their own Evidence),
          the Verifier's signing key (third-party attestation after
          independent verification), or the evidence service's key
          (hosting attestation: the service vouches for packet
          availability and immutability). The signature type SHOULD be
          indicated by the COSE key identifier (kid) header or inferred
          from the evidence-uri domain.
        </t>
      </section>

      <section anchor="compact-verification">
        <name>Verification of Compact References</name>

        <section anchor="compact-verify-reference">
          <name>Reference-Only Verification</name>

          <t>
            Without fetching the full Evidence packet containing VDF proofs and HMAC Jitter Seals, a Verifier within the RATS architecture can perform reference-only verification by:
          </t>
          <ol>
            <li>verifying that the COSE compact-signature is valid under the signer's public key;</li>
            <li>identifying the signer (author, third-party verifier, or evidence hosting service) from the COSE key identifier;</li>
            <li>checking that evidence-uri points to a trusted source from which the full CBOR encoded Evidence can be fetched if needed; and</li>
            <li>displaying the compact-summary to the user, showing checkpoint count, total characters, VDF duration, and evidence tier.</li>
          </ol>

          <t>
            This reference-only verification provides basic assurance within the RATS trust framework that Evidence exists and was attested by a known party whose COSE signature is valid, without requiring full verification of the SHA-256 segment chain, VDF proofs, or HMAC entropy commitments.
          </t>
        </section>

        <section anchor="compact-verify-full">
          <name>Full Verification via URI</name>

          <t>
            For complete verification within the RATS architecture, the Verifier follows a six-step procedure that validates both the compact reference and the full CBOR encoded Evidence packet:
          </t>
          <ol>
            <li>fetch the Evidence packet from evidence-uri using HTTPS or other secure transport;</li>
            <li>verify that packet-id (UUID) matches between the compact reference and the fetched packet;</li>
            <li>verify that chain-hash (SHA-256) matches the final tree-root in the fetched Evidence;</li>
            <li>verify that document-hash (SHA-256) matches the document-ref content-hash, binding the evidence to the attested document;</li>
            <li>perform full Evidence verification per this specification, including VDF proof recomputation, HMAC entropy verification, and COSE signature validation; and</li>
            <li>verify that the compact-summary values (checkpoint-count, total-chars, total-vdf-time, evidence-tier) match the values computed from the Evidence.</li>
          </ol>

          <t>
            Discrepancies between the compact reference and the fetched Evidence MUST cause verification to fail within the RATS trust framework, as such discrepancies indicate either tampering with the compact reference, corruption of the full Evidence packet, or a mismatch between the referenced and fetched packets.
          </t>
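          <t>
            The hash cross-checks between a compact reference and the fetched Evidence can be sketched as follows. This is a non-normative illustration using string keys in place of the schema's integer keys; fetching (step 1) and full VDF/HMAC/COSE verification (step 5) are omitted:
          </t>
          <sourcecode type="python"><![CDATA[
```python
import hashlib

def check_compact_reference(ref, evidence, document: bytes) -> bool:
    # `ref` and `evidence` are decoded CBOR maps (illustrative keys).
    if ref["packet_id"] != evidence["packet_id"]:        # step 2
        return False
    if ref["chain_hash"] != evidence["tree_root"]:       # step 3
        return False
    if ref["document_hash"] != hashlib.sha256(document).digest():  # step 4
        return False
    summary = ref["summary"]                             # step 6
    if summary["checkpoints"] != len(evidence["checkpoints"]):
        return False
    if summary["chars"] != evidence["total_chars"]:
        return False
    return True
```
]]></sourcecode>
          <t>
            Any failed comparison causes verification to fail, matching the requirement that discrepancies between reference and Evidence are fatal.
          </t>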
        </section>
      </section>

      <section anchor="compact-encoding">
        <name>Encoding Formats</name>

        <t>
          Compact Evidence References within the RATS architecture may be encoded in several formats depending on the embedding context. The base representation is CBOR according to the CDDL schema; transformations are available for contexts requiring text encoding, URL-safe encoding, or human-readable representation, each preserving the cryptographic binding via SHA-256 hashes and COSE signatures:
        </t>

        <section anchor="compact-encoding-cbor">
          <name>CBOR Encoding</name>

          <t>
            The native format is CBOR with the 0x50505021 tag. This is
            the most compact binary representation, suitable for:
          </t>

          <ul>
            <li>Binary metadata fields</li>
            <li>Protocol messages</li>
            <li>Database storage</li>
          </ul>

          <t>
            Typical size: 150-250 bytes.
          </t>
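          <t>
            For concreteness, the tag's on-the-wire header can be computed as follows (non-normative sketch):
          </t>
          <sourcecode type="python"><![CDATA[
```python
import struct

# CBOR major type 6 (tag) with a 32-bit argument: initial byte 0xDA,
# i.e. (6 << 5) | 26, followed by the tag number in network byte order.
POP_TAG = 0x50505021                         # spells "PPP!" in ASCII
tag_header = bytes([0xDA]) + struct.pack(">I", POP_TAG)
# tag_header.hex() == "da50505021"; the reference map follows it.
```
]]></sourcecode>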
        </section>

        <section anchor="compact-encoding-base64">
          <name>Base64 Encoding</name>

          <t>
            For text-only contexts, the CBOR bytes are base64url-encoded:
          </t>

          <artwork><![CDATA[
    pop-ref:2nQAAZD1UPAgowGQA...base64url...
    ]]></artwork>

          <t>
            The "pop-ref:" prefix enables detection and parsing.
            Typical size: 200-350 characters.
          </t>
        </section>


      </section>

      <section anchor="compact-example">
        <name>Compact Reference Example</name>

        <artwork type="cbor-diag"><![CDATA[
    / Tagged Compact Evidence Reference (0x50505021 = "PPP!") /
    1347440673({
      1: h'550e8400e29b41d4a716446655440000',  / packet-id /
      2: {                                      / chain-hash /
        1: 1,
        2: h'a7ffc6f8bf1ed76651c14756a061d662
              f580ff4de43b49fa82d80a4b80f8434a'
      },
      3: {                                      / document-hash /
        1: 1,
        2: h'e3b0c44298fc1c149afbf4c8996fb924
              27ae41e4649b934ca495991b7852b855'
      },
      4: {                                      / compact-summary /
        1: 47,                                  / checkpoints /
        2: 12500,                               / chars /
        3: 5400.0,                              / VDF time: 90 min /
        4: 2,                                   / tier: Standard /
        5: 2,                                   / verdict: manual-composition-likely /
        6: 870                                  / confidence: 870/1000 = 0.87 /
      },
      5: "https://evidence.example.com/p/550e8400e29b41d4a716446655440000.pop",
      6: h'D28441A0A201260442...',              / compact-signature /
      7: {                                      / metadata /
        1: "Jane Author",
        2: 1(1706745600),                       / created /
        3: "WritersLogic Verification Service",
        4: 1(1706832000)                        / verified /
      }
    })
    ]]></artwork>

        <t>
          Encoded size: approximately 220 bytes (CBOR), 295 characters
          (base64url).
        </t>
      </section>
  </section>

    <!-- Section: Implementation Status (RFC 7942) -->
    <section anchor="implementation-status" removeInRFC="false">
      <name>Implementation Status</name>

      <t>
        This section records the status of known implementations of the protocol
        defined by this specification at the time of publication, and is based
        on a proposal described in <xref target="RFC7942"/>. The description of
        implementations in this section is intended to assist the IETF in its
        decision processes in progressing drafts to RFCs. Please note that the
        listing of any individual implementation here does not imply endorsement
        by the IETF. Furthermore, no effort has been spent to verify the
        information presented here that was supplied by IETF contributors.
        This is not intended as, and must not be construed to be, a catalog of
        available implementations or their features. Readers are advised to note
        that other implementations may exist.
      </t>

      <t>
        According to <xref target="RFC7942"/>, "this will allow reviewers and
        working groups to assign due consideration to documents that have the
        benefit of running code, which may serve as evidence of valuable
        experimentation and feedback that have made the implemented protocols
        more mature. It is up to the individual working groups to use this
        information as they see fit."
      </t>

      <section anchor="impl-witnessd-core">
        <name>witnessd-core (Reference Implementation)</name>

        <dl>
          <dt>Organization:</dt>
          <dd>Writerslogic Inc</dd>

          <dt>Implementation Name:</dt>
          <dd>witnessd-core</dd>

          <dt>Implementation URL:</dt>
          <dd>https://github.com/writerslogic/witnessd</dd>

          <dt>Description:</dt>
          <dd>
            Rust library implementing the complete PoP specification including
            checkpoint generation, VDF computation (Wesolowski construction),
            HMAC Jitter Seal entropy binding, hash chain construction, COSE
            signatures, and CBOR encoding. Supports all three evidence tiers
            (software-only, attested, hardware-bound) with TPM 2.0 and Secure
            Enclave integration.
          </dd>

          <dt>Maturity Level:</dt>
          <dd>Production</dd>

          <dt>Coverage:</dt>
          <dd>
            Full specification coverage including: checkpoint chain construction,
            VDF temporal proofs, jitter entropy binding, absence claims,
            forgery cost bounds, continuation tokens, salt modes, and all
            profile levels (core, enhanced, maximum).
          </dd>

          <dt>Version Compatibility:</dt>
          <dd>Schema version 1.6.0</dd>

          <dt>Licensing:</dt>
          <dd>Apache-2.0</dd>

          <dt>Contact:</dt>
          <dd>David Condrey (david@writerslogic.com)</dd>
        </dl>
      </section>

      <section anchor="impl-witnessd-cli">
        <name>witnessd-cli</name>

        <dl>
          <dt>Organization:</dt>
          <dd>Writerslogic Inc</dd>

          <dt>Implementation Name:</dt>
          <dd>witnessd-cli</dd>

          <dt>Implementation URL:</dt>
          <dd>https://github.com/writerslogic/witnessd</dd>

          <dt>Description:</dt>
          <dd>
            Command-line interface built on witnessd-core providing evidence
            generation, verification, and inspection capabilities. Supports
            batch processing, JSON output, and integration with build systems
            and CI/CD pipelines.
          </dd>

          <dt>Maturity Level:</dt>
          <dd>Production</dd>

          <dt>Coverage:</dt>
          <dd>
            Complete Evidence Packet (.pop) generation and Attestation
            Result (.war) verification.
          </dd>

          <dt>Licensing:</dt>
          <dd>Apache-2.0</dd>

          <dt>Contact:</dt>
          <dd>David Condrey (david@writerslogic.com)</dd>
        </dl>
      </section>

      <section anchor="impl-macos-app">
        <name>Witnessd for macOS</name>

        <dl>
          <dt>Organization:</dt>
          <dd>Writerslogic Inc</dd>

          <dt>Implementation Name:</dt>
          <dd>Witnessd for macOS</dd>

          <dt>Description:</dt>
          <dd>
            Native macOS desktop application providing real-time evidence
            generation during document editing. Integrates with Secure Enclave
            for hardware-bound key storage and attestation (Tier T3).
          </dd>

          <dt>Maturity Level:</dt>
          <dd>Production</dd>

          <dt>Coverage:</dt>
          <dd>
            Full evidence generation with Secure Enclave integration,
            automatic checkpoint creation, and evidence export.
          </dd>

          <dt>Platform:</dt>
          <dd>macOS 12.0+ (Apple Silicon and Intel)</dd>

          <dt>Contact:</dt>
          <dd>David Condrey (david@writerslogic.com)</dd>
        </dl>
      </section>

      <section anchor="impl-windows-app">
        <name>Witnessd for Windows</name>

        <dl>
          <dt>Organization:</dt>
          <dd>Writerslogic Inc</dd>

          <dt>Implementation Name:</dt>
          <dd>Witnessd for Windows</dd>

          <dt>Description:</dt>
          <dd>
            Native Windows desktop application providing real-time evidence
            generation during document editing. Integrates with TPM 2.0 for
            hardware-bound key storage and attestation (Tier T3).
          </dd>

          <dt>Maturity Level:</dt>
          <dd>Production</dd>

          <dt>Coverage:</dt>
          <dd>
            Full evidence generation with TPM 2.0 integration,
            automatic checkpoint creation, and evidence export.
          </dd>

          <dt>Platform:</dt>
          <dd>Windows 10/11</dd>

          <dt>Contact:</dt>
          <dd>David Condrey (david@writerslogic.com)</dd>
        </dl>
      </section>

      <section anchor="impl-online-verifier">
        <name>WritersLogic Online Verifier</name>

        <dl>
          <dt>Organization:</dt>
          <dd>Writerslogic Inc</dd>

          <dt>Implementation Name:</dt>
          <dd>WritersLogic Verification Service</dd>

          <dt>Implementation URL:</dt>
          <dd>https://writerslogic.com/verify</dd>

          <dt>Description:</dt>
          <dd>
            Web-based independent Verifier implementation for Attestation
            Result generation. Accepts Evidence Packets (.pop), performs
            complete verification including chain integrity, VDF validation,
            entropy threshold checks, and optional external anchor
            verification, then produces Attestation Results (.war).
          </dd>

          <dt>Maturity Level:</dt>
          <dd>Production</dd>

          <dt>Coverage:</dt>
          <dd>
            Full Verifier role implementation per RATS architecture including
            all verification checks defined in this specification.
          </dd>

          <dt>Contact:</dt>
          <dd>David Condrey (david@writerslogic.com)</dd>
        </dl>
      </section>

      <section anchor="impl-interoperability">
        <name>Interoperability Testing</name>

        <t>
          The implementations listed above have been tested for interoperability.
          Evidence Packets generated by witnessd-core, witnessd-cli, the macOS
          application, and the Windows application are successfully verified by
          the WritersLogic Online Verifier, demonstrating cross-implementation
          compatibility.
        </t>

        <t>
          Test vectors from <xref target="I-D.condrey-rats-pop-examples"/>
          have been validated against all implementations.
        </t>
      </section>
    </section>


    <!-- Section 13: Security Considerations -->
    <section anchor="security-considerations">
      <name>Security Considerations</name>

      <t>
        This section consolidates security analysis for the witnessd Proof of Process specification within the RATS <xref target="RFC9334"/> architecture, referencing and extending the per-section security considerations defined in <xref target="jitter-security"/> for HMAC Jitter Seal entropy, <xref target="vdf-security"/> for VDF chain temporal guarantees, <xref target="absence-security"/> for gap detection in segment-based Merkle trees, <xref target="fcb-security"/> for forgery cost bound quantification, and <xref target="evidence-model-security"/> for overall CBOR evidence integrity with COSE signatures.
      </t>

      <t>
        The specification adopts a quantified security approach consistent with the RATS philosophy: rather than claiming evidence is "secure" or "insecure" in absolute terms, security is expressed as cost asymmetries in VDF recomputation, entropy prediction barriers in HMAC-SHA256 Jitter Seals, and tamper-evidence properties through SHA-256 hash chains with COSE signatures. This framing reflects the fundamental reality that sufficiently resourced adversaries can eventually forge any evidence; the goal is to make forgery economically irrational for most scenarios by ensuring that the VDF time cost, HMAC entropy prediction cost, and computational resource cost exceed the potential gain from successful forgery.
      </t>

      <section anchor="research-limitations">
        <name>Research Limitations and Assumptions</name>
        <t>
          The "Biology" invariant relies on psycholinguistic correlations (e.g.,
          Cognitive Load Delays) that require further large-scale empirical
          validation across diverse modern input methods (e.g., mobile autocomplete).
          The "Pink Noise" metric assumes a simplified motor control model.
          Repeated evidence generation by the same author may enable cross-session
          timing analysis, which is mitigated through periodic key rotation and
          timing value clipping.
        </t>
      </section>

      <section anchor="threat-model">
        <name>Threat Model</name>

        <t>
          The witnessd threat model within the RATS architecture defines three categories relevant to VDF chain security, HMAC entropy commitment integrity, and SHA-256 segment chain tamper-evidence: adversary goals, describing the attacks the protocol defends against; assumed adversary capabilities, bounding the resources available to attackers; and explicitly out-of-scope adversaries, whose capabilities exceed the design assumptions of this CBOR evidence format with COSE signatures.
        </t>

        <section anchor="adversary-goals">
          <name>Adversary Goals</name>

          <t>
            The protocol defends against adversaries pursuing five primary goals:
          </t>
          <ol>
            <li>Backdating: creating evidence that falsely claims an earlier creation time;</li>
            <li>Fabrication: generating evidence for documents never genuinely authored;</li>
            <li>Transplanting: associating legitimate evidence with different content;</li>
            <li>Omission: selectively removing checkpoints from an evidence chain;</li>
            <li>Impersonation: attributing evidence to a different device or author.</li>
          </ol>
          <t>
            Each goal is addressed through specific cryptographic and structural properties: VDF sequential computation prevents backdating, jitter entropy prevents fabrication, content binding prevents transplanting, Merkle trees detect omission, and hardware attestation prevents impersonation.
          </t>
        </section>

        <section anchor="adversary-capabilities">
          <name>Assumed Adversary Capabilities</name>

          <t>
            The RATS profile specification assumes adversaries have five categories of capabilities that bound the security guarantees of VDF chains, HMAC entropy commitments, and SHA-256 checkpoint integrity.
          </t>
          <dl>
            <dt>Software Control:</dt>
            <dd>The adversary has full control over software running on their device, including the ability to modify or replace the Attesting Environment that generates CBOR evidence, and can intercept, modify, or fabricate any software-generated data that is not protected by TPM 2.0 <xref target="TPM2.0"/> hardware attestation or similar tamper-resistant hardware.</dd>
            <dt>Commodity Hardware Access:</dt>
            <dd>The adversary can acquire commodity computing hardware at market prices for computing SHA-256 iterations, and may have access to cloud computing resources enabling them to rent substantial computational capacity for VDF recomputation attempts.</dd>
            <dt>Bounded Compute Resources:</dt>
            <dd>The adversary's computational resources are bounded by economic constraints quantified in the forgery cost bounds: they cannot instantaneously compute arbitrarily large numbers of VDF iterations, and the wall-clock time required for sequential SHA-256 computation cannot be circumvented with additional parallel resources due to the inherent data dependency between iterations.</dd>
            <dt>Algorithm Knowledge:</dt>
            <dd>The adversary has complete knowledge of all algorithms, including VDF constructions, CDDL schemas, COSE signatures, and CBOR encoding; security does not depend on obscurity, and the specification is public.</dd>
            <dt>Statistical Sophistication:</dt>
            <dd>The adversary can perform statistical analysis on Jitter Seal timing histograms and may attempt to generate synthetic behavioral data that passes Min-Entropy (H_min) tests, though the commitment-before-observation model prevents adaptive synthesis.</dd>
          </dl>
        </section>

        <section anchor="out-of-scope-adversaries">
          <name>Out-of-Scope Adversaries</name>

          <t>
            The RATS profile specification explicitly does NOT defend against five categories of adversaries whose capabilities exceed the design assumptions of VDF chains, HMAC entropy commitments, and SHA-256 checkpoint integrity.
          </t>
          <dl>
            <dt>Nation-State Adversaries with HSM Compromise:</dt>
            <dd>Adversaries capable of extracting keys from hardware security modules (TPM 2.0, Secure Enclave) through sophisticated physical attacks, side-channel analysis, or manufacturer compromise; hardware attestation via COSE signatures assumes HSM integrity for calibration and identity binding.</dd>
            <dt>Cryptographic Breakthrough:</dt>
            <dd>Adversaries with access to novel cryptanalytic techniques that break SHA-256 collision resistance, the ECDSA signature security underlying COSE, or other standard cryptographic primitives used throughout the CDDL schema; the specification relies on established cryptographic assumptions.</dd>
            <dt>Quantum Adversaries:</dt>
            <dd>Adversaries with access to fault-tolerant quantum computers capable of executing Shor's algorithm (breaking the RSA/ECDSA used in COSE signatures) or obtaining significant Grover speedups against SHA-256 preimage resistance. Post-quantum considerations are noted in <xref target="vdf-post-quantum"/>, but full quantum resistance is not claimed.</dd>
            <dt>Time Travel:</dt>
            <dd>Adversaries capable of creating CBOR evidence at one point in time and presenting it as if created earlier where external anchors via RFC 3161 or blockchain timestamps are not available or have been compromised; external timestamp authorities are trusted for absolute time claims beyond VDF relative ordering.</dd>
            <dt>Coerced Authors:</dt>
            <dd>Adversaries who coerce legitimate authors into producing evidence with valid VDF proofs and HMAC Jitter Seals under duress; the specification documents process rather than intent or consent.</dd>
          </dl>

          <t>
            The exclusion of these adversaries is not a weakness but a recognition of practical threat modeling within the RATS framework. Evidence systems designed to defend against nation-state actors with HSM compromise or quantum computational capabilities would impose costs and constraints (such as post-quantum COSE algorithms or hardware-isolated attestation environments) unsuitable for general authoring scenarios, where the goal is making forgery economically irrational rather than theoretically impossible.
          </t>
        </section>
      </section>

      <section anchor="cryptographic-security">
        <name>Cryptographic Security</name>

        <t>
          The specification relies on established
          cryptographic primitives with
          well-understood security properties. This section documents the
          security assumptions and requirements for each
          cryptographic component.
        </t>

        <section anchor="hash-function-security">
          <name>Hash Function Security</name>

          <t>
            Hash functions are used throughout the specification for content
            binding, chain construction, entropy commitment,
            and VDF computation.
          </t>

          <dl>
            <dt>Required Properties:</dt>
            <dd>
              <ul>
                <li>
                  <t>Collision Resistance:</t>
                  <t>
                    It must be computationally infeasible to find
                    two distinct
                    inputs that produce the same hash output. This property
                    ensures that different document states produce different
                    content-hash values.
                  </t>
                </li>

                <li>
                  <t>Preimage Resistance:</t>
                  <t>
                    Given a hash output, it must be computationally
                    infeasible
                    to find any input that produces that output.
                    This property
                    prevents adversaries from constructing documents
                    that match
                    a predetermined hash.
                  </t>
                </li>

                <li>
                  <t>Second Preimage Resistance:</t>
                  <t>
                    Given an input and its hash, it must be computationally
                    infeasible to find a different input with the same hash.
                    This property prevents document substitution attacks.
                  </t>
                </li>
              </ul>
            </dd>

            <dt>Algorithm Requirements:</dt>
            <dd>
              <t>
                SHA-256 is RECOMMENDED and MUST be supported
                by all implementations.
                SHA-3-256 SHOULD be supported for algorithm
                agility. Hash functions
                with known weaknesses (MD5, SHA-1) MUST NOT be used.
              </t>
            </dd>

            <dt>Security Margin:</dt>
            <dd>
              <t>
                SHA-256 provides 128-bit security against collision attacks
                and 256-bit security against preimage attacks under classical
                assumptions. Known quantum attacks reduce these margins to
                roughly 85 bits (Brassard-Hoyer-Tapp collision search) and
                128 bits (Grover preimage search), respectively. This margin
                is considered adequate for the specification's threat model.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="signature-security">
          <name>Signature Security</name>

          <t>
            Digital signatures are used for segment chain authentication,
            hardware attestation, calibration binding, and
            Attestation Result
            integrity.
          </t>

          <dl>
            <dt>COSE Algorithm Requirements:</dt>
            <dd>
              <t>
                Implementations MUST support COSE algorithm identifiers
                from the COSE registry <xref target="IANA.cose"/>:
              </t>
              <ul>
                <li>ES256 (ECDSA with P-256 and SHA-256): MUST support</li>
                <li>
                ES384 (ECDSA with P-384 and SHA-384):
                SHOULD support
                </li>
                <li>EdDSA (Ed25519): SHOULD support</li>
                <li>ML-DSA (Dilithium): REQUIRED for Maximum Tier evidence to ensure post-quantum signature security</li>
              </ul>
              <t>
                RSA-based algorithms (PS256, RS256) MAY be supported for
                compatibility with legacy systems but are not
                recommended for
                new implementations due to larger signature sizes
                and post-quantum
                vulnerability.
              </t>
            </dd>

            <dt>Key Size Requirements:</dt>
            <dd>
              <t>
                Minimum key sizes for 128-bit security:
              </t>
              <ul>
                <li>ECDSA: P-256 curve or larger</li>
                <li>EdDSA: Ed25519 or Ed448</li>
                <li>RSA: 3072 bits or larger</li>
              </ul>
            </dd>

            <dt>Signature Binding:</dt>
            <dd>
              <t>
                Signatures MUST bind to the complete payload being signed.
                Partial payload signatures (signing a subset of
                fields) create
                opportunities for field substitution attacks. The chain-mac
                field provides additional binding beyond the
                checkpoint signature.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="vdf-cryptographic-security">
          <name>VDF Security</name>

          <t>
            Verifiable Delay Functions provide the temporal
            security foundation
            of the specification. VDF security rests on the
            sequential computation
            requirement.
          </t>

          <dl>
            <dt>Sequential Computation:</dt>
            <dd>
              <t>
                The VDF output cannot be computed significantly
                faster than the
                specified number of sequential operations. For iterated hash
                VDFs, this reduces to the assumption that no
                algorithm computes
                H^n(x) faster than n sequential hash evaluations. No such
                algorithm is known for cryptographic hash functions.
              </t>
            </dd>

            <dt>Parallelization Resistance:</dt>
            <dd>
              <t>
                Additional computational resources (more
                processors, GPUs, ASICs)
                cannot reduce the wall-clock time required for
                VDF computation.
                The iterated hash construction is inherently
                sequential: each
                iteration depends on the previous output.
              </t>
              <t>
                See <xref target="vdf-parallelization"/> for
                detailed analysis.
              </t>
            </dd>

            <dt>Verification Soundness:</dt>
            <dd>
              <t>
                For iterated hash VDFs, verification is by
                recomputation. The
                Verifier executes the same computation and compares results.
                This provides perfect soundness: a claimed
                output that differs
                from the actual computation will always be detected.
              </t>
              <t>
                For succinct VDFs (Pietrzak,
                Wesolowski), verification relies on the
                cryptographic hardness of the underlying problem
                (RSA group or
                class group). Soundness is computational rather
                than perfect.
              </t>
            </dd>
          </dl>
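          <t>
            The iterated-hash construction and verification by recomputation can be sketched as follows. The iteration count here is a toy value for illustration; deployed chains use far larger counts (e.g., on the order of 2^25 iterations):
          </t>
          <sourcecode type="python"><![CDATA[
```python
import hashlib

def vdf_eval(seed: bytes, iterations: int) -> bytes:
    # Iterated-hash VDF H^n(seed): each round consumes the previous
    # digest, so additional processors cannot shorten wall-clock time.
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

def vdf_verify(seed: bytes, iterations: int, claimed: bytes) -> bool:
    # Verification by recomputation: perfect soundness, at the cost of
    # the Verifier redoing the same sequential work as the Prover.
    return vdf_eval(seed, iterations) == claimed
```
]]></sourcecode>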
        </section>

        <section anchor="vdf-entanglement-attacks">
          <name>VDF Entanglement Attack Vectors</name>

          <t>
            The VDF Entanglement mechanism binds each checkpoint to
            the previous VDF output, current content-hash, jitter-commitment,
            and sequence number. This section analyzes three specific
            attack vectors against this construction and documents
            their mitigations and cost bounds.
          </t>

          <section anchor="grinding-attacks">
            <name>Grinding Attacks</name>

            <t>
              A grinding attack attempts to influence the VDF output
              by iteratively selecting different jitter-commitment values
              until the resulting VDF input produces a favorable output.
            </t>

            <dl>
              <dt>Attack Mechanism:</dt>
              <dd>
                <t>
                  The attacker generates candidate raw-interval sequences,
                  computes SHA-256 for each to produce candidate
                  jitter-commitments, computes the full VDF for each
                  input, and evaluates whether the output satisfies
                  some "favorable" criterion.
                </t>
              </dd>

              <dt>Cost Bound:</dt>
              <dd>
                <t>
                  With T=2^25 sequential SHA-256 iterations (~10 seconds
                  minimum wall-clock time per VDF evaluation), grinding
                  N candidates requires N×10 seconds of sequential work.
                  Parallelization reduces wall-clock time but increases
                  hardware cost proportionally. For N=1000 candidates,
                  the attacker requires either ~2.8 hours sequential or
                  1000× hardware investment for 10-second parallel grinding.
                </t>
              </dd>

              <dt>Mitigation:</dt>
              <dd>
                <t>
                  The VDF's inherent sequential computation requirement
                  ensures grinding cost scales linearly with attempts.
                  Verifiers SHOULD treat evidence with implausibly
                  favorable VDF outputs (e.g., outputs matching specific
                  bit patterns) with increased skepticism. The economic
                  irrationality of grinding depends on the value of
                  achieving a "favorable" output being less than the
                  computational cost; for most authorship scenarios,
                  no particular VDF output provides exploitable advantage.
                </t>
              </dd>

              <dt>Residual Risk:</dt>
              <dd>
                <t>
                  If the "favorable output" criterion is loose (e.g.,
                  any output in a large set), grinding becomes more
                  feasible. Implementations SHOULD NOT rely on VDF
                  outputs having any particular statistical properties
                  beyond unpredictability.
                </t>
              </dd>
            </dl>
          </section>

          <section anchor="pre-computation-attacks">
            <name>Pre-computation Attacks</name>

            <t>
              A pre-computation attack attempts to compute VDF chains
              offline when the content-hash is predictable (e.g.,
              template documents, boilerplate), then present them as
              evidence of real-time work.
            </t>

            <dl>
              <dt>Attack Mechanism:</dt>
              <dd>
                <t>
                  For pre-computation to succeed, the attacker must know
                  VDF_output{N-1} (requiring a valid prior chain),
                  predict content-hash{N} (achievable for templates),
                  forge jitter-commitment{N} (requiring synthetic
                  behavioral data), and correctly predict sequence{N}
                  (deterministic from chain state).
                </t>
              </dd>

              <dt>Primary Defense:</dt>
              <dd>
                <t>
                  The jitter-commitment acts as a cryptographic nonce.
                  Because entropy-commitment = SHA-256(raw-intervals)
                  is computed BEFORE histogram aggregation, the attacker
                  must commit to specific interval sequences, not merely
                  plausible histogram shapes. The cardinality of valid
                  interval sequences vastly exceeds histogram space,
                  preventing pre-computation of the commitment.
                </t>
              </dd>

              <dt>Binding MAC Defense:</dt>
              <dd>
                <t>
                  The binding-mac field includes prev-tree-root, preventing
                  transplantation of jitter data from unrelated checkpoint
                  chains. An attacker cannot pre-compute jitter for
                  chain A and graft it onto chain B.
                </t>
              </dd>

              <dt>Residual Risk:</dt>
              <dd>
                <t>
                  An attacker who records legitimate typing sessions on
                  their own device can replay those intervals with
                  pre-computed content. The jitter-commitment is "real"
                  but temporally decoupled from the content work. This
                  attack requires the attacker to have produced genuine
                  behavioral data at some point; it enables temporal
                  displacement but not fabrication ex nihilo.
                  Implementations requiring stronger guarantees SHOULD
                  require external timestamp anchors (RFC 3161 or
                  blockchain) to bind evidence to absolute time.
                </t>
              </dd>
            </dl>
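            <t>
              The commitment-before-aggregation defense can be illustrated
              with a short sketch. The serialization format (big-endian
              uint32 per interval) and function name are illustrative
              assumptions; only the principle of hashing raw intervals
              rather than histograms follows the text above.
            </t>

```python
import hashlib
import struct

def jitter_commitment(raw_intervals_ms):
    """Commit to the exact raw inter-keystroke intervals (ms),
    not to their histogram summary. Serialization is an
    illustrative assumption, not normative."""
    data = b"".join(struct.pack(">I", int(i)) for i in raw_intervals_ms)
    return hashlib.sha256(data).hexdigest()

# Two sequences with identical histograms commit differently,
# so knowing the histogram does not reveal the commitment:
a = jitter_commitment([120, 95, 140, 95])
b = jitter_commitment([95, 120, 95, 140])
```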
          </section>

          <section anchor="statistical-modeling-attacks">
            <name>Statistical Modeling Attacks</name>

            <t>
              A statistical modeling attack trains a machine learning
              model on legitimate jitter data to generate synthetic
              patterns that pass entropy validation, enabling fake
              checkpoints without real behavioral input.
            </t>

            <dl>
              <dt>Attack Mechanism:</dt>
              <dd>
                <t>
                  The attacker collects legitimate jitter histograms,
                  trains a generative model (VAE, GAN, or similar) on
                  the distribution, and samples synthetic histograms
                  matching learned statistical properties.
                </t>
              </dd>

              <dt>Primary Defense:</dt>
              <dd>
                <t>
                  The commitment-before-observation model is the critical
                  defense. Because entropy-commitment = SHA-256(raw-intervals)
                  is computed from raw intervals, not histogram buckets,
                  the attacker must generate plausible raw interval
                  sequences. The space of valid interval sequences
                  (millisecond-precision timings across hundreds or
                  thousands of events) is orders of magnitude larger
                  than the histogram summary space. Training a generative
                  model on histograms provides no information about
                  which specific interval sequences produced those
                  histograms.
                </t>
              </dd>

              <dt>Entropy Validation:</dt>
              <dd>
                <t>
                  Verifiers compute Min-Entropy (H_min) on the declared
                  histogram. Synthetic histograms that are "too uniform"
                  (high entropy) or "too concentrated" (low entropy)
                  fail validation. Hurst exponent analysis (valid range
                  H ∈ [0.55, 0.85]) further distinguishes genuine behavioral
                  data exhibiting long-range temporal dependence from
                  synthetic generation attempts.
                </t>
              </dd>

              <dt>Residual Risk:</dt>
              <dd>
                <t>
                  An attacker with access to large corpora of raw interval
                  sequences (not just histograms) could train a generative
                  model on intervals directly. Timing value clipping bounds
                  information leakage; however, an attacker observing many
                  histograms from the same source could infer distributional
                  properties. Implementations requiring defense against
                  well-resourced statistical attackers SHOULD require
                  hardware attestation (T3/T4 tiers) binding jitter capture
                  to trusted execution environments where the raw intervals
                  cannot be intercepted pre-commitment.
                </t>
              </dd>
            </dl>
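            <t>
              A minimal sketch of the min-entropy check, assuming a
              histogram represented as a list of bucket counts (the
              representation is illustrative):
            </t>

```python
import math

def min_entropy_bits(histogram):
    """Min-entropy H_min = -log2(max_i p_i) over histogram buckets."""
    total = sum(histogram)
    p_max = max(histogram) / total
    return -math.log2(p_max)

# A concentrated histogram scores low; a spread one scores higher.
low = min_entropy_bits([97, 1, 1, 1])      # ~0.04 bits
high = min_entropy_bits([25, 25, 25, 25])  # exactly 2 bits
```

            <t>
              Hurst exponent estimation is not shown; it operates on the
              full interval series rather than the histogram summary.
            </t>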
          </section>

          <section anchor="combined-attack-cost">
            <name>Combined Attack Cost Analysis</name>

            <t>
              An adversary attempting to forge evidence must overcome
              multiple independent barriers simultaneously:
            </t>

            <ul>
              <li>
                <t>
                  VDF recomputation: ~10 seconds wall-clock minimum
                  per checkpoint, non-parallelizable
                </t>
              </li>
              <li>
                <t>
                  Jitter synthesis: Must generate raw intervals (not
                  histograms) that pass entropy validation and match
                  behavioral plausibility tests
                </t>
              </li>
              <li>
                <t>
                  Chain binding: Must possess valid previous VDF output
                  and tree root, preventing ex nihilo fabrication
                </t>
              </li>
              <li>
                <t>
                  Temporal binding: External anchors (when present)
                  constrain absolute timing claims
                </t>
              </li>
            </ul>

            <t>
              The composition of these barriers means that practical
              forgery requires either (a) legitimate prior chain access
              plus VDF computation time plus synthetic but plausible
              jitter, or (b) compromise of the Attesting Environment
              itself. Cost-asymmetry is maintained: generating genuine
              evidence requires only normal authoring activity, while
              forgery requires computational investment, behavioral
              data acquisition, and chain access.
            </t>
          </section>
        </section>

        <section anchor="key-management-security">
          <name>Key Management</name>

          <t>
            Proper key management is essential for
            maintaining evidence integrity.
          </t>

          <dl>
            <dt>Hardware-Bound Keys:</dt>
            <dd>
              <t>
                When available, signing keys SHOULD be bound to
                hardware security
                modules (TPM, Secure Enclave). Hardware binding provides:
              </t>
              <ul>
                <li>
                  Key non-exportability: Private keys cannot
                  be extracted from
                  the device
                </li>
                <li>
                  Device binding: Evidence can be tied to a
                  specific physical
                  device
                </li>
                <li>
                  Tamper resistance: Key compromise requires physical attack
                </li>
              </ul>
            </dd>

            <dt>Session Keys:</dt>
            <dd>
              <t>
                The checkpoint-chain-key used for chain-mac
                computation SHOULD
                be derived uniquely for each session. Key
                derivation SHOULD use
                HKDF (RFC 5869) with domain separation:
              </t>
              <artwork><![CDATA[
    # Root Credential (from Enrollment)
    # RC = HKDF-SHA256(PUF_Seed, "witnessd-root-v1")
    #
    # Session Key (derived from Root Credential)
    # SK = HKDF-SHA256(RC, session-nonce || "witnessd-session-v1")
    #
    # Checkpoint Chain Key (derived from Session Key)
    # CCK = HKDF-SHA256(SK, "witnessd-chain-v1")

    chain-key = HKDF-SHA256(
        salt = session-entropy,
        ikm = device-master-key,
        info = "witnessd-chain-v1" || session-id
    )
    ]]></artwork>
            </dd>

            <dt>Key Rotation:</dt>
            <dd>
              <t>
                Device keys SHOULD be rotated periodically
                (RECOMMENDED: annually)
                or upon suspected compromise. Evidence packets created with
                revoked keys SHOULD be flagged during verification.
              </t>
            </dd>
          </dl>
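          <t>
            A minimal HKDF-SHA256 sketch of the chain-key derivation shown
            above (single-block expand, sufficient for 32-byte outputs; the
            placeholder byte values are not real key material):
          </t>

```python
import hashlib
import hmac

def hkdf_sha256(salt: bytes, ikm: bytes, info: bytes,
                length: int = 32) -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869): extract, then a single
    expand block (valid for length <= 32)."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()            # extract
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand
    return okm[:length]

# Placeholder inputs for illustration only:
session_entropy = b"\x00" * 32
device_master_key = b"\x11" * 32
session_id = b"session-0001"
chain_key = hkdf_sha256(session_entropy, device_master_key,
                        b"witnessd-chain-v1" + session_id)
```

          <t>
            Because the session-id is folded into the info parameter, keys
            derived for different sessions are independent even when the
            device master key is unchanged.
          </t>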
        </section>
      </section>

      <section anchor="attesting-environment-trust">
        <name>Attesting Environment Trust</name>

        <t>
          The Attesting Environment (AE) is the
          witnessd-core software running
          on the author's device. Understanding what the AE is
          trusted for, and
          what it is NOT trusted for, is essential for
          correct interpretation
          of evidence.
        </t>

        <section anchor="ae-trust-scope">
          <name>What the AE Is Trusted For</name>

          <t>
            The AE is trusted to perform accurate observation
            and honest reporting
            of the specific data it captures:
          </t>

          <dl>
            <dt>Accurate Timing Measurement:</dt>
            <dd>
              <t>
                The AE is trusted to accurately measure
                inter-keystroke intervals
                and other timing data. This does not require
                trusting the content
                of keystrokes, only the timing between events.
              </t>
            </dd>

            <dt>Correct Hash Computation:</dt>
            <dd>
              <t>
                The AE is trusted to correctly compute
                cryptographic hashes of
                document content. Verification can detect
                incorrect hashes, but
                cannot detect if the AE computed a hash of different content
                than claimed.
              </t>
            </dd>

            <dt>VDF Execution:</dt>
            <dd>
              <t>
                The AE is trusted to actually execute VDF
                iterations rather than
                fabricating outputs. This trust is partially verifiable: VDF
                outputs can be recomputed, but the claimed timing cannot be
                independently verified without calibration attestation.
              </t>
            </dd>

            <dt>Monitoring Events (for monitoring-dependent claims):</dt>
            <dd>
              <t>
                For claims in the monitoring-dependent category
                (types 16-63),
                the AE is trusted to have actually observed and reported the
                events (or non-events) it claims. This trust is
                documented in
                the ae-trust-basis field.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="ae-trust-limitations">
          <name>What the AE Is NOT Trusted For</name>

          <t>
            The specification explicitly does NOT rely on AE trust for the
            following:
          </t>

          <dl>
            <dt>Content Judgment:</dt>
            <dd>
              <t>
                The AE makes no claims about document quality, originality,
                accuracy, or appropriateness. Evidence documents
                process, not
                content merit.
              </t>
            </dd>

            <dt>Intent Inference:</dt>
            <dd>
              <t>
                The AE makes no claims about why the author
                performed specific
                actions, what the author was thinking, or whether the author
                intended to deceive. Evidence documents observable behavior,
                not mental states.
              </t>
            </dd>

            <dt>Authorship Attribution:</dt>
            <dd>
              <t>
                The AE makes no claims about who was operating
                the device. The
                evidence shows that input events occurred on a
                device; it does
                not prove that a specific individual produced those events.
              </t>
            </dd>

            <dt>Cognitive Process:</dt>
            <dd>
              <t>
                Behavioral patterns consistent with human
                typing do not prove
                human cognition. An adversary could theoretically
                program input
                patterns that mimic human timing while the
                content originates
                elsewhere. The Jitter Seal makes this costly,
                not impossible.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="hardware-attestation-role">
          <name>Hardware Attestation Role</name>

          <t>
            Hardware attestation increases AE trust by binding evidence to
            verified hardware:
          </t>

          <dl>
            <dt><xref target="TPM2.0"/> (Linux, Windows):</dt>
            <dd>
              <t>
                Provides platform integrity measurement (PCRs),
                key sealing to
                platform state, and hardware-bound signing keys.
                TPM attestation
                proves that the AE was running on a specific
                device in a specific
                configuration.
              </t>
            </dd>

            <dt>Secure Enclave (macOS, iOS):</dt>
            <dd>
              <t>
                Provides hardware-bound key generation and
                signing operations.
                Keys generated in the Secure Enclave cannot be
                exported, binding
                signatures to the specific device.
              </t>
            </dd>

            <dt>Attestation Limitations:</dt>
            <dd>
              <t>
                Hardware attestation proves the signing key is
                hardware-bound;
                it does not prove the AE software is unmodified.
                Full AE integrity
                would require secure boot attestation and runtime integrity
                measurement, which are platform-specific and not universally
                available.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="compromised-ae-scenarios">
          <name>Compromised AE Scenarios</name>

          <t>
            Understanding the impact of AE compromise is essential for risk
            assessment:
          </t>

          <dl>
            <dt>Modified AE Software:</dt>
            <dd>
              <t>
                An adversary running modified AE software can fabricate any
                monitoring-dependent claims (types 16-63). Chain-verifiable
                claims (types 1-15) remain bound by VDF computational
                requirements even with modified software.
              </t>
            </dd>

            <dt>Fake Calibration:</dt>
            <dd>
              <t>
                Modified software could report artificially slow calibration
                rates, making subsequent VDF computations appear
                to take longer
                than they actually did. This attack is mitigated by:
              </t>
              <ul>
                <li>
                  Hardware-signed calibration attestation (when available)
                </li>
                <li>
                  Plausibility checks based on device class
                </li>
                <li>
                  External anchor cross-validation
                </li>
              </ul>
            </dd>

            <dt>Fabricated Jitter Data:</dt>
            <dd>
              <t>
                Modified software could generate synthetic timing data that
                mimics human patterns. The cost of this attack
                is bounded by:
              </t>
              <ul>
                <li>
                  Real-time generation requirement (VDF entanglement)
                </li>
                <li>
                  Statistical consistency across checkpoints
                </li>
                <li>
                  Entropy threshold requirements
                </li>
              </ul>
              <t>
                See <xref target="jitter-simulation-attacks"/>
                for quantified
                bounds on simulation attacks.
              </t>
            </dd>

            <dt>Mitigation Summary:</dt>
            <dd>
              <t>
                AE compromise cannot reduce the VDF
                computational requirement
                or bypass the sequential execution constraint.
                Compromise enables
                fabrication of monitoring data but does not
                eliminate the time
                cost of forgery. The forgery-cost-section
                quantifies the minimum
                resources required even with full software control.
              </t>
            </dd>
          </dl>
        </section>
      </section>

      <section anchor="verification-security">
        <name>Verification Security</name>

        <t>
          The verification process must be secure against
          both malicious Evidence
          and malicious Verifiers.
        </t>

        <section anchor="verifier-independence">
          <name>Verifier Independence</name>

          <t>
            Evidence verification is designed to be
            independent of the Attester:
          </t>

          <dl>
            <dt>No Shared State:</dt>
            <dd>
              <t>
                Verification requires no communication with or data from the
                Attester beyond the Evidence packet itself. A Verifier with
                only the .pop file can perform complete verification.
              </t>
            </dd>

            <dt>Adversarial Verification:</dt>
            <dd>
              <t>
                A skeptical Verifier can appraise Evidence
                without trusting any
                claims made by the Attester. All cryptographic proofs are
                included and can be recomputed independently.
              </t>
            </dd>

            <dt>Multiple Independent Verifiers:</dt>
            <dd>
              <t>
                Multiple Verifiers appraising the same Evidence should reach
                consistent results for computationally-bound claims.
                Monitoring-dependent claims may receive different confidence
                assessments based on Verifier policies.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="sampling-strategies">
          <name>Sampling Strategies for Large Evidence Packets</name>

          <t>
            Evidence packets may contain thousands of checkpoints. Full
            verification of all VDF proofs may be impractical. Verifiers
            MAY use sampling strategies:
          </t>

          <dl>
            <dt>Boundary Verification:</dt>
            <dd>
              <t>
                Always verify the first and last checkpoints fully. This
                confirms the chain endpoints.
              </t>
            </dd>

            <dt>Random Sampling:</dt>
            <dd>
              <t>
                Randomly select checkpoints for full VDF verification. If
                any sampled checkpoint fails, reject the entire Evidence.
                With k independent uniform samples (with replacement) from
                n checkpoints, the probability of detecting a single
                invalid checkpoint is 1 - (1 - 1/n)^k; sampling without
                replacement improves this to k/n.
              </t>
            </dd>

            <dt>Chain Linkage Verification:</dt>
            <dd>
              <t>
                Verify prev-hash linkage for ALL checkpoints
                (computationally
                cheap). This ensures no checkpoints were removed
                or reordered.
              </t>
            </dd>

            <dt>Anchor-Bounded Verification:</dt>
            <dd>
              <t>
                If external anchors are present, prioritize verification of
                checkpoints adjacent to anchors. External timestamps bound
                the timeline at anchor points.
              </t>
            </dd>

            <dt>Sampling Disclosure:</dt>
            <dd>
              <t>
                Attestation Results SHOULD disclose the sampling
                strategy used
                and the number of checkpoints fully verified.
                Relying Parties
                can assess whether the sampling provides adequate confidence
                for their use case.
              </t>
            </dd>
          </dl>
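          <t>
            The detection-probability formula can be evaluated directly.
            The helper below, which searches for the smallest sample count
            reaching a target confidence, is an illustrative utility and
            not part of this specification:
          </t>

```python
def detection_probability(n: int, k: int) -> float:
    """Probability that k uniform samples (with replacement) from n
    checkpoints hit at least one specific invalid checkpoint."""
    return 1 - (1 - 1 / n) ** k

def samples_for_confidence(n: int, target: float) -> int:
    """Smallest k achieving the target detection probability."""
    k = 0
    while detection_probability(n, k) < target:
        k += 1
    return k
```

          <t>
            Note that against a single invalid checkpoint the required
            sample count grows with chain length, which is why boundary
            verification and full chain-linkage checks complement random
            sampling.
          </t>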
        </section>

        <section anchor="external-anchor-verification">
          <name>External Anchor Verification</name>

          <t>
            External anchors (RFC 3161 timestamps,
            blockchain proofs) provide
            absolute time binding but introduce additional
            trust requirements:
          </t>

          <dl>
            <dt>Timestamp Authority Trust:</dt>
            <dd>
              <t>
                Timestamps per RFC 3161 require trust in the Time
                Stamping Authority
                (TSA). Verifiers SHOULD use TSAs with published policies and
                audit records. Multiple TSAs MAY be used for redundancy.
              </t>
            </dd>

            <dt>Blockchain Anchor Verification:</dt>
            <dd>
              <t>
                Blockchain-based anchors require access to blockchain data
                (directly or via APIs). Verifiers SHOULD verify:
              </t>
              <ul>
                <li>
                  The transaction containing the anchor is confirmed
                </li>
                <li>
                  Sufficient confirmations for the security level required
                </li>
                <li>
                  The anchor commitment matches the expected segment data
                </li>
              </ul>
            </dd>

            <dt>Anchor Freshness:</dt>
            <dd>
              <t>
                Anchors prove that Evidence existed at the anchor time; they
                do not prove Evidence was created at that time. An adversary
                could create Evidence, wait, then obtain an anchor. This is
                mitigated by anchor coverage requirements (multiple anchors
                throughout the session).
              </t>
            </dd>
          </dl>
        </section>
      </section>

      <section anchor="protocol-security">
        <name>Protocol Security</name>

        <t>
          This section addresses protocol-level attacks
          and mitigations, drawing
          on the per-section security analyses.
        </t>

        <section anchor="replay-attack-prevention">
          <name>Replay Attack Prevention</name>

          <t>
            Replay attacks attempt to reuse valid evidence
            components in invalid
            contexts. Multiple mechanisms prevent replay:
          </t>

          <dl>
            <dt>Nonce Binding:</dt>
            <dd>
              <t>
                Session entropy (random 256-bit seed) is
                incorporated into the
                genesis checkpoint VDF input. This prevents
                precomputation of
                VDF outputs before a session begins.
              </t>
            </dd>

            <dt>Chain Binding:</dt>
            <dd>
              <t>
                Each checkpoint includes prev-hash, binding it
                to the specific
                chain history. Checkpoints cannot be transplanted
                between chains
                without invalidating the hash linkage.
              </t>
              <t>
                See <xref target="jitter-replay-attacks"/>
                for jitter-specific
                replay prevention.
              </t>
            </dd>

            <dt>Sequence Binding:</dt>
            <dd>
              <t>
                Checkpoint sequence numbers MUST be strictly
                monotonic. Duplicate
                or out-of-order sequence numbers indicate manipulation.
              </t>
            </dd>

            <dt>Content Binding:</dt>
            <dd>
              <t>
                VDF inputs incorporate content-hash, binding
                temporal proofs to
                specific document states. Evidence for one
                document cannot be
                transferred to another without VDF recomputation.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="transplant-attack-prevention">
          <name>Transplant Attack Prevention</name>

          <t>
            Transplant attacks attempt to associate legitimate evidence from
            one context with content from another context:
          </t>

          <dl>
            <dt>Content-VDF Binding:</dt>
            <dd>
              <t>
                The VDF input includes content-hash:
              </t>
              <artwork><![CDATA[
    VDF_input{N} = H(
        VDF_output{N-1} ||
        content-hash{N} ||
        jitter-commitment{N} ||
        sequence{N}
    )
    ]]></artwork>
              <t>
                Changing the document content requires
                recomputing all subsequent
                VDF proofs.
              </t>
            </dd>

            <dt>Jitter-VDF Binding:</dt>
            <dd>
              <t>
                The jitter-commitment is entangled with VDF
                input. Transplanting
                jitter data from another session is infeasible
                because it would
                require the original VDF output (which depends on different
                content) or recomputing the entire VDF chain with new jitter
            
                (which requires capturing new behavioral
                entropy in real time).
              </t>
            </dd>

            <dt>Chain MAC:</dt>
            <dd>
              <t>
                The chain-mac field HMAC-binds checkpoints to the session's
                chain-key:
              </t>
              <artwork><![CDATA[
    chain-mac = HMAC-SHA256(
        key = chain-key,
        message = tree-root || sequence || session-id
    )
    ]]></artwork>
              <t>
                Without the chain-key, an adversary cannot construct valid
                chain-mac values for transplanted checkpoints.
              </t>
            </dd>
          </dl>
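          <t>
            The two bindings above can be sketched as follows; the
            sequence-number encoding (big-endian uint64) is an illustrative
            assumption:
          </t>

```python
import hashlib
import hmac

def vdf_input(prev_vdf_output: bytes, content_hash: bytes,
              jitter_commit: bytes, sequence: int) -> bytes:
    """H(VDF_output{N-1} || content-hash{N} ||
    jitter-commitment{N} || sequence{N})."""
    seq = sequence.to_bytes(8, "big")
    return hashlib.sha256(prev_vdf_output + content_hash +
                          jitter_commit + seq).digest()

def chain_mac(chain_key: bytes, tree_root: bytes,
              sequence: int, session_id: bytes) -> bytes:
    """HMAC-SHA256(chain-key, tree-root || sequence || session-id)."""
    msg = tree_root + sequence.to_bytes(8, "big") + session_id
    return hmac.new(chain_key, msg, hashlib.sha256).digest()

example = vdf_input(bytes(32), b"\x01" * 32, b"\x02" * 32, 1)
```

          <t>
            Changing any single input (content hash, jitter commitment,
            prior VDF output, or sequence number) changes the digest, so a
            transplanted component fails recomputation.
          </t>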
        </section>

        <section anchor="backdating-attack-costs">
          <name>Backdating Attack Costs</name>

          <t>
            Backdating creates evidence claiming a process occurred earlier
            than it actually did. The cost of backdating is quantified by
            the VDF recomputation requirement:
          </t>

          <dl>
            <dt>VDF Recomputation:</dt>
            <dd>
              <t>
                To backdate evidence by inserting or modifying
                checkpoints at
                position P, the adversary must recompute all VDF proofs from
                position P forward. This requires:
              </t>
              <artwork><![CDATA[
    backdate_time >= sum(iterations[i]) / adversary_vdf_rate
                     for i = P to N
    ]]></artwork>
              <t>
                where N is the final checkpoint. Backdating by a significant
                amount (hours or days) requires proportional
                wall-clock time.
              </t>
            </dd>

            <dt>External Anchor Constraints:</dt>
            <dd>
              <t>
                If external anchors exist in the chain,
                backdating is constrained
                to the interval between anchors. An adversary
                cannot backdate
                before an anchor without also forging the
                external timestamp.
              </t>
            </dd>

            <dt>Cost Quantification:</dt>
            <dd>
              <t>
                The forgery-cost-section provides explicit cost bounds for
                backdating attacks, including compute costs, time costs, and
                economic estimates.
              </t>
            </dd>
          </dl>
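          <t>
            The bound above can be evaluated directly. The iteration counts
            and adversary VDF rate below are illustrative assumptions:
          </t>

```python
def backdate_time_seconds(iterations_per_checkpoint, p, adversary_vdf_rate):
    """Lower bound on wall-clock time to recompute all VDF proofs
    from checkpoint position p to the end of the chain."""
    return sum(iterations_per_checkpoint[p:]) / adversary_vdf_rate

# Assumed example: 500 checkpoints at 2^25 iterations each, and an
# adversary evaluating 1e7 sequential SHA-256 iterations per second.
iterations = [2**25] * 500
t_full = backdate_time_seconds(iterations, 0, 1.0e7)    # whole chain
t_tail = backdate_time_seconds(iterations, 499, 1.0e7)  # last checkpoint
```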
        </section>

        <section anchor="omission-attack-prevention">
          <name>Omission Attack Prevention</name>

          <t>
            Omission attacks selectively remove checkpoints
            to hide unfavorable
            evidence:
          </t>

          <dl>
            <dt>Sequence Verification:</dt>
            <dd>
              <t>
                Checkpoint sequence numbers MUST be consecutive.
                Missing sequence
                numbers indicate omission. Verifiers MUST reject chains with
                non-consecutive sequences.
              </t>
            </dd>

            <dt>Hash Chain Integrity:</dt>
            <dd>
              <t>
                Removing a checkpoint breaks the hash chain (subsequent
                checkpoint's prev-hash will not match). Repairing the chain
                requires recomputing all subsequent segment hashes and
                VDF proofs.
              </t>
            </dd>

            <dt>Completeness Claims:</dt>
            <dd>
              <t>
                The checkpoint-chain-complete absence claim
                (type 6) explicitly
                asserts that no checkpoints were omitted. This claim is
                computationally-bound.
              </t>
            </dd>
          </dl>
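          <t>
            Both checks run in linear time over the chain. The checkpoint
            layout (dictionary keys) in this sketch is an illustrative
            assumption, not a normative encoding:
          </t>

```python
import hashlib

def verify_chain_linkage(checkpoints):
    """Cheap omission checks: consecutive sequence numbers and
    intact prev-hash linkage across the whole chain."""
    for i, cp in enumerate(checkpoints):
        if cp["sequence"] != checkpoints[0]["sequence"] + i:
            return False  # missing or duplicated sequence number
        if i > 0 and cp["prev_hash"] != checkpoints[i - 1]["hash"]:
            return False  # broken hash chain: removal or reordering
    return True

# Build a toy three-checkpoint chain, then drop the middle one:
chain, prev = [], bytes(32)
for seq in range(3):
    h = hashlib.sha256(prev + seq.to_bytes(8, "big")).digest()
    chain.append({"sequence": seq, "prev_hash": prev, "hash": h})
    prev = h

ok = verify_chain_linkage(chain)                  # intact chain
tampered = verify_chain_linkage([chain[0], chain[2]])  # omission
```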
        </section>
      </section>

      <section anchor="operational-security">
        <name>Operational Security</name>

        <t>
          Security of the overall system depends on proper
          operational practices
          beyond the protocol specification.
        </t>

        <section anchor="key-lifecycle">
          <name>Key Lifecycle Management</name>

          <dl>
            <dt>Key Generation:</dt>
            <dd>
              <t>
                Device keys SHOULD be generated within hardware
                security modules
                when available. Software-generated keys MUST use
                cryptographically
                secure random number generators.
              </t>
            </dd>

            <dt>Key Storage:</dt>
            <dd>
              <t>
                Private keys SHOULD be stored in platform-appropriate secure
                storage:
              </t>
              <ul>
                <li>macOS: Secure Enclave or Keychain</li>
                <li>Linux: TPM or system keyring</li>
                <li>Windows: TPM or DPAPI</li>
              </ul>
              <t>
                Keys MUST NOT be stored in plaintext in the filesystem.
              </t>
            </dd>

            <dt>Key Rotation:</dt>
            <dd>
              <t>
                Organizations SHOULD establish key rotation policies.
                RECOMMENDED rotation interval: annually or upon personnel
                changes. Evidence packets created with revoked keys SHOULD
                receive reduced confidence scores.
              </t>
            </dd>

            <dt>Key Revocation:</dt>
            <dd>
              <t>
                Mechanisms for key revocation are outside the scope of this
                specification but SHOULD be considered for deployment.
                Certificate revocation lists (CRLs) or OCSP may
                be appropriate
                for managed environments.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="evidence-storage-transmission">
          <name>Evidence Packet Storage and Transmission</name>

          <dl>
            <dt>Integrity Protection:</dt>
            <dd>
              <t>
                Evidence packets are self-protecting through cryptographic
                binding. Additional encryption is not required for integrity
                but MAY be applied for confidentiality.
              </t>
            </dd>

            <dt>Confidentiality Considerations:</dt>
            <dd>
              <t>
                Evidence packets contain document hashes and
                behavioral data.
                While content is not included, statistical information about
                the authoring process is present. Transmission
                over untrusted
                networks SHOULD use TLS 1.3 or equivalent.
              </t>
            </dd>

            <dt>Archival Storage:</dt>
            <dd>
              <t>
                Evidence packets intended for long-term storage SHOULD be:
              </t>
              <ul>
                <li>
                  Stored with redundancy (multiple copies,
                  geographic distribution)
                </li>
                <li>
                  Protected against bit rot (checksums,
                  error-correcting codes)
                </li>
                <li>
                  Associated with necessary verification materials
                  (public keys,
                  anchor confirmations)
                </li>
              </ul>
            </dd>

            <dt>Retention Policies:</dt>
            <dd>
              <t>
                Organizations SHOULD establish retention policies balancing
                evidentiary value against privacy considerations. Jitter data
                has privacy implications; retention beyond the verification
                period may not be necessary or desirable.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="verifier-policy">
          <name>Verifier Policy Considerations</name>

          <dl>
            <dt>Minimum Requirements:</dt>
            <dd>
              <t>
                Verifiers SHOULD establish minimum requirements
                for acceptable
                Evidence:
              </t>
              <ul>
                <li>
                  Minimum evidence tier (Basic, Standard,
                  Enhanced, Maximum)
                </li>
                <li>
                  Minimum VDF duration relative to
                  claimed authoring time
                </li>
                <li>Minimum entropy threshold</li>
                <li>Required absence claims for specific use cases</li>
              </ul>
            </dd>

            <dt>Confidence Thresholds:</dt>
            <dd>
              <t>
                Verifiers SHOULD define confidence thresholds
                for acceptance:
              </t>
              <ul>
                <li>Low-stakes: confidence &gt;= 0.3 may be acceptable</li>
                <li>Standard: confidence &gt;= 0.5 typical requirement</li>
                <li>High-stakes: confidence &gt;= 0.7 recommended</li>
                <li>Litigation: confidence &gt;= 0.8 with Maximum tier</li>
              </ul>
            </dd>

            <dt>Caveat Handling:</dt>
            <dd>
              <t>
                Verifiers SHOULD define how caveats affect
                acceptance decisions.
                Some caveats may be disqualifying for specific
                use cases (e.g.,
                "no hardware attestation" may be unacceptable
                for high-stakes
                verification).
              </t>
            </dd>
          </dl>
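          <t>
            A Verifier's acceptance policy combining the threshold guidance
            and caveat handling above might be sketched as follows. The
            caveat identifier and the tier check are illustrative
            assumptions, not normative values:
          </t>
          <sourcecode type="python"><![CDATA[
# Confidence thresholds from the guidance above.
THRESHOLDS = {
    "low-stakes":  0.3,
    "standard":    0.5,
    "high-stakes": 0.7,
    "litigation":  0.8,
}

# Caveats a deployment might treat as disqualifying (hypothetical name).
DISQUALIFYING = {
    "high-stakes": {"no-hardware-attestation"},
    "litigation":  {"no-hardware-attestation"},
}

def accept(use_case, confidence, caveats, tier="Standard"):
    if confidence < THRESHOLDS[use_case]:
        return False
    if use_case == "litigation" and tier != "Maximum":
        return False  # litigation additionally requires the Maximum tier
    if set(caveats) & DISQUALIFYING.get(use_case, set()):
        return False
    return True

assert accept("standard", 0.55, [])
assert not accept("high-stakes", 0.75, ["no-hardware-attestation"])
assert not accept("litigation", 0.9, [], tier="Enhanced")
]]></sourcecode>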
        </section>
      </section>

      <section anchor="limitations-nongoals">
        <name>Limitations and Non-Goals</name>

        <t>
          This section explicitly documents what the specification does NOT
          protect against and what it does NOT claim to achieve.
        </t>

        <section anchor="unprotected-attacks">
          <name>Attacks Not Protected Against</name>

          <dl>
            <dt>Collusion:</dt>
            <dd>
              <t>
                If the author and a third party collude (e.g., the author
                provides their device credentials to another
                person who types
                while the author is credited), the Evidence will show a
                legitimate-looking process. The specification documents
                observable behavior, not identity.
              </t>
            </dd>

            <dt>Pre-Prepared Content:</dt>
            <dd>
              <t>
                An author could slowly type pre-prepared content, creating
                Evidence of a gradual process for content that
                already existed.
                The specification documents that typing occurred, not that
                thinking occurred during typing.
              </t>
            </dd>

            <dt>External Input Devices:</dt>
            <dd>
              <t>
                Input from devices not monitored by the AE (e.g., hardware
                keystroke injectors, remote desktop from
                unmonitored machines)
                may not be distinguishable from local input. Hardware-level
                input verification is outside scope.
              </t>
            </dd>

            <dt>Social Engineering:</dt>
            <dd>
              <t>
                Attacks that manipulate Relying Parties into accepting
                inappropriate Evidence (e.g., convincing a reviewer that
                weak Evidence is sufficient) are outside scope.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="honest-author-assumption">
          <name>The Honest Author Assumption</name>

          <t>
            The specification fundamentally documents PROCESS, not INTENT:
          </t>

          <dl>
            <dt>Evidence Shows What Happened:</dt>
            <dd>
              <t>
                Evidence shows that input events occurred with
                specific timing
                patterns, that VDF computation required certain time, that
                document states changed in sequence. Evidence does not show
                why any of this happened.
              </t>
            </dd>

            <dt>Process != Cognition:</dt>
            <dd>
              <t>
                Evidence that an author typed content gradually
                does not prove
                the author thought of that content. The author
                could have been
                transcribing, copying from memory, or following dictation.
              </t>
            </dd>

            <dt>Behavioral Consistency:</dt>
            <dd>
              <t>
                The correct interpretation of Evidence is
                "behavioral consistency":
                the observable process was consistent with the
                claimed process.
                This is weaker than "authorship proof" but is verifiable and
                falsifiable.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="content-agnostic">
          <name>Content-Agnostic By Design</name>

          <t>
            The specification is deliberately content-agnostic:
          </t>

          <dl>
            <dt>No Semantic Analysis:</dt>
            <dd>
              <t>
                Evidence contains document hashes, not content.
                The specification
                makes no claims about what was written, only how
                it was written.
              </t>
            </dd>

            <dt>No Quality Assessment:</dt>
            <dd>
              <t>
                Evidence does not indicate whether content is
                good, original,
                accurate, or valuable. Strong Evidence can
                accompany poor content;
                excellent content can have weak Evidence.
              </t>
            </dd>

            <dt>No AI Detection:</dt>
            <dd>
              <t>
                The specification explicitly does NOT claim to
                detect whether
                content was "written by AI" or "written by a human" in terms
                of content origin. It documents the observable
                INPUT process,
                which is distinct from content generation.
              </t>
            </dd>

            <dt>Privacy Benefit:</dt>
            <dd>
              <t>
                Content-agnosticism is a privacy feature. Evidence can be
                verified without accessing the document content, enabling
                verification of confidential documents.
              </t>
            </dd>
          </dl>
        </section>
      </section>

      <section anchor="comparison-related-work">
        <name>Comparison to Related Work</name>

        <t>
          This section compares the security model of
          witnessd Proof of Process
          to related attestation and timestamping systems.
        </t>

        <section anchor="comparison-timestamping">
          <name>Comparison to Traditional Timestamping</name>

          <t>
            Traditional timestamping (<xref target="RFC3161"/>) proves that
            a document existed
            at a point in time. Proof of Process provides
            additional properties:
          </t>

          <table>
            <thead>
              <tr>
                <th>Property</th>
                <th>RFC 3161</th>
                <th>Proof of Process</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Existence proof</td>
                <td>Yes (point in time)</td>
                <td>Yes (continuous)</td>
              </tr>
              <tr>
                <td>Process documentation</td>
                <td>No</td>
                <td>Yes</td>
              </tr>
              <tr>
                <td>Behavioral evidence</td>
                <td>No</td>
                <td>Yes (jitter)</td>
              </tr>
              <tr>
                <td>Temporal ordering</td>
                <td>No (independent timestamps)</td>
                <td>Yes (VDF chain)</td>
              </tr>
              <tr>
                <td>Third-party trust</td>
                <td>Required (TSA)</td>
                <td>Optional (anchors)</td>
              </tr>
              <tr>
                <td>Local generation</td>
                <td>No (requires TSA interaction)</td>
                <td>Yes</td>
              </tr>
            </tbody>
          </table>

          <t>
            Proof of Process is complementary to timestamping.
            External anchors
            (including RFC 3161 timestamps) provide absolute
            time binding that
            strengthens VDF-based relative ordering.
          </t>
        </section>

        <section anchor="comparison-code-signing">
          <name>Comparison to Code Signing</name>

          <t>
            Code signing attests to the identity of the
            signer and integrity of
            the signed artifact. Proof of Process serves different goals:
          </t>

          <table>
            <thead>
              <tr>
                <th>Property</th>
                <th>Code Signing</th>
                <th>Proof of Process</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Identity binding</td>
                <td>Strong (PKI)</td>
                <td>Weak (device-bound)</td>
              </tr>
              <tr>
                <td>Artifact integrity</td>
                <td>Yes</td>
                <td>Yes (hash binding)</td>
              </tr>
              <tr>
                <td>Creation process</td>
                <td>No</td>
                <td>Yes</td>
              </tr>
              <tr>
                <td>Temporal properties</td>
                <td>Timestamp only</td>
                <td>Duration, ordering</td>
              </tr>
              <tr>
                <td>Use case</td>
                <td>Software distribution</td>
                <td>Authoring documentation</td>
              </tr>
            </tbody>
          </table>

          <t>
            Code signing establishes "who signed this"; Proof of Process
            establishes "how this was created." The two could be combined
            for comprehensive provenance documentation.
          </t>
        </section>

        <section anchor="comparison-rats">
          <name>Relationship to RATS Security Model</name>

          <t>
            Proof of Process implements an application-specific
            profile of the
            RATS architecture. Key security model
            alignments:
          </t>

          <dl>
            <dt>Evidence vs. Attestation Results:</dt>
            <dd>
              <t>
                The separation between .pop (Evidence) and .war (Attestation
                Result) files follows the RATS distinction. Evidence is
                produced by the Attester; Attestation Results
                by the Verifier.
              </t>
            </dd>

            <dt>Appraisal Policy:</dt>
            <dd>
              <t>
                RATS defines Appraisal Policy for Evidence as the Verifier's
                rules for evaluating Evidence. The absence-claim thresholds
                and confidence-level requirements serve this role in Proof
                of Process.
              </t>
            </dd>

            <dt>Background Check vs. Passport Model:</dt>
            <dd>
              <t>
                Proof of Process supports both RATS models.
                The "passport model"
                applies when the author obtains a .war file and
                presents it to
                Relying Parties. The "background check model"
                applies when the
                Relying Party verifies the .pop file directly or through a
                trusted Verifier.
              </t>
            </dd>

            <dt>Freshness:</dt>
            <dd>
              <t>
                RATS freshness mechanisms (nonces, timestamps) align with
                the session-entropy and external-anchor mechanisms in Proof
                of Process. VDF proofs provide an additional freshness
                dimension: evidence of elapsed time.
              </t>
            </dd>

            <dt>Endorsements and Reference Values:</dt>
            <dd>
              <t>
                Hardware attestation in the hardware-section corresponds to
                RATS Endorsements. Calibration data serves as
                Reference Values
                for VDF timing verification.
              </t>
            </dd>
          </dl>

          <t>
            For RATS-specific security guidance, implementers should also
            consult the RATS security considerations in
            RFC 9334 Section 11.
          </t>
        </section>
      </section>

      <section anchor="zk-forensic-verdict">
        <name>Process Score Construction</name>
        <t>
          The Verifier evaluates Evidence across three dimensions, each
          producing a component score in the range [0.0, 1.0]:
        </t>
        <ol>
          <li>Residency (R): Strength of hardware binding, from software-only
          (0.0-0.7) through TPM attestation (0.7-0.9) to TEE-captured input
          events (0.9-1.0).</li>
          <li>Sequence (S): VDF chain integrity and temporal plausibility,
          including monotonic ordering, calibration consistency, and external
          anchor corroboration.</li>
          <li>Behavioral Consistency (B): Whether the behavioral metrics in
          the evidence chain reflect a consistent generative process, derived
          from spectral analysis, edit operation distributions, and temporal
          evolution of per-checkpoint measurements.</li>
        </ol>
        <t>
          The Process Score combines these components:
        </t>
        <artwork><![CDATA[
    PS = w_R * R + w_S * S + w_B * B

    Default weights: w_R = 0.3, w_S = 0.3, w_B = 0.4
    Verifier-configurable; weights MUST sum to 1.0
    ]]></artwork>
        <t>
          The Process Score is a measurement of evidence chain strength,
          not a verdict on authorship: it does not classify content origin
          or determine authorship identity. Evidence satisfying source
          consistency constraints supports high-confidence assessment.
          Verifiers include the Process Score in the Attestation Result;
          Relying Parties apply domain-specific policies to determine what
          Process Score is acceptable for their use case.
        </t>
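        <t>
          As a non-normative sketch, the weighted combination above can be
          computed as follows; the component values in the example are
          hypothetical, and only the default weights come from this
          section:
        </t>
        <sourcecode type="python"><![CDATA[
def process_score(R, S, B, w_R=0.3, w_S=0.3, w_B=0.4):
    # Weights are Verifier-configurable but MUST sum to 1.0.
    assert abs((w_R + w_S + w_B) - 1.0) < 1e-9
    # Each component score lies in [0.0, 1.0].
    for component in (R, S, B):
        assert 0.0 <= component <= 1.0
    return w_R * R + w_S * S + w_B * B

# Example: strong sequence evidence, moderate residency and behavior.
ps = process_score(R=0.8, S=0.9, B=0.7)   # 0.79
]]></sourcecode>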

        <section anchor="zk-source-consistency">
          <name>Source Consistency Verification</name>
          <t>
            When ZK proof mechanisms are employed (T3-T4), the proof attests
            to the following properties without exporting behavioral data:
          </t>
          <artwork><![CDATA[
    "The evidence chain exhibits:
     (1) unbroken VDF temporal ordering across all checkpoints,
     (2) valid entropy commitments bound to content hashes,
     (3) behavioral metrics consistent with interactive editing, and
     (4) no source consistency transitions exceeding threshold."
    ]]></artwork>
          <t>
            The ZK proof allows a Verifier to confirm these properties
            without access to the underlying timing data, preserving author
            privacy while enabling high-confidence source consistency
            evaluation.
          </t>
        </section>
      </section>

      <section anchor="security-summary">
        <name>Security Properties Summary</name>

        <t>
          This section summarizes the security properties provided by the
          specification:
        </t>

        <section anchor="properties-provided">
          <name>Properties Provided</name>

          <dl>
            <dt>Tamper-Evidence:</dt>
            <dd>
              <t>
                Modifications to Evidence packets are detectable through
                cryptographic verification. The hash chain,
                VDF entanglement,
                and MAC bindings ensure that alteration
                invalidates the Evidence.
              </t>
            </dd>

            <dt>Cost-Asymmetric Forgery:</dt>
            <dd>
              <t>
                Producing counterfeit Evidence requires resources
                (time, compute,
                entropy generation) disproportionate to legitimate Evidence
                creation. The forgery-cost-section quantifies
                these requirements.
              </t>
            </dd>

            <dt>Independent Verifiability:</dt>
            <dd>
              <t>
                Evidence can be verified by any party without access to the
                original device, without trust in the Attester's
                infrastructure,
                and without network connectivity (except for
                external anchors).
              </t>
            </dd>

            <dt>Privacy by Construction:</dt>
            <dd>
              <t>
                Document content is never stored in Evidence.
                Behavioral data
                is aggregated before inclusion. The specification enforces
                privacy through structural constraints, not policy.
              </t>
            </dd>

            <dt>Temporal Ordering:</dt>
            <dd>
              <t>
                VDF chain construction provides tamper-evident
                relative ordering of checkpoints with forgery costs
                bounded by VDF recomputation time. External anchors
                provide absolute time binding.
              </t>
            </dd>

            <dt>Behavioral Binding:</dt>
            <dd>
              <t>
                Jitter Seal entanglement binds captured
                behavioral entropy to
                the segment chain, making Evidence
                transplantation infeasible.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="properties-not-provided">
          <name>Properties NOT Provided</name>

          <dl>
            <dt>Tamper-Proof:</dt>
            <dd>
              <t>
                Evidence CAN be forged given sufficient resources. The
                specification makes forgery costly, not impossible.
              </t>
            </dd>

            <dt>Identity Proof:</dt>
            <dd>
              <t>
                Evidence does NOT prove who operated the device. It proves
                that input events occurred on a device, not that a specific
                person produced them.
              </t>
            </dd>

            <dt>Intent Proof:</dt>
            <dd>
              <t>
                Evidence does NOT prove why actions occurred. Observable
                behavior is documented; mental states are not.
              </t>
            </dd>

            <dt>Content Origin Proof:</dt>
            <dd>
              <t>
                Evidence does NOT prove where ideas came from. The input
                process is documented; the cognitive source is not.
              </t>
            </dd>

            <dt>Absolute Certainty:</dt>
            <dd>
              <t>
                All security properties are bounded by explicit assumptions.
                No claim is made to be absolute, irrefutable, or guaranteed.
              </t>
            </dd>
          </dl>
        </section>
      </section>
  </section>


    <!-- Section 8: Privacy Considerations -->
    <section anchor="privacy-considerations">
      <name>Privacy Considerations</name>

      <t>
        This section consolidates privacy analysis for the witnessd Proof of
        Process specification. It references and extends
        the per-section privacy
        considerations defined in <xref target="jitter-privacy"/>,
        <xref target="absence-privacy"/>, and
        <xref target="privacy-construction"/>.
      </t>

      <t>
        Privacy is a core design goal of this
        specification, not an afterthought.
        The protocol implements privacy-by-construction:
        structural constraints
        that make privacy violations architecturally impossible, rather than
        relying on policy or trust. This approach follows the guidance of
        <xref target="RFC6973"/>
        (Privacy Considerations for Internet Protocols).
      </t>

      <section anchor="privacy-design-principles">
        <name>Privacy by Construction</name>

        <t>
          The witnessd evidence model enforces privacy through architectural
          constraints that cannot be circumvented without fundamentally
          modifying the protocol.
        </t>

        <section anchor="no-content-storage">
          <name>No Document Content Storage</name>

          <t>
            Evidence packets contain cryptographic hashes of
            document states,
            never the document content itself. This is a
            structural invariant:
          </t>

          <ul>
            <li>
              <t>Content Hash Binding:</t>
              <t>
                The document-ref structure (CDDL key 5 in evidence-packet)
                contains only a hash-value of the final document
                content, the
                byte-length, and character count. The content
                itself is never
                included in the Evidence packet.
              </t>
            </li>

            <li>
              <t>Checkpoint Content Hashes:</t>
              <t>
                Each checkpoint (key 4: content-hash) contains a hash of the
                document state at that point. An adversary with the Evidence
                packet but not the document cannot recover
                content from these
                hashes.
              </t>
            </li>

            <li>
              <t>Edit Deltas Without Content:</t>
              <t>
                The edit-delta structure (key 7 in checkpoint) records
                chars-added, chars-deleted, insertions, deletions, and
                replacements as counts only. No information about what
                characters were added or deleted is included.
              </t>
            </li>
          </ul>

          <t>
            This design enables verification of process
            without revealing what
            was written, supporting confidential document
            workflows where the
            evidence must be verifiable but the content must remain private.
          </t>
        </section>

        <section anchor="no-keystroke-capture">
          <name>No Keystroke Capture</name>

          <t>
            The specification captures inter-event timing intervals without
            recording which keys were pressed:
          </t>

          <ul>
            <li>
              <t>Timing-Only Measurement:</t>
              <t>
                Jitter-binding captures millisecond intervals between input
                events. The interval "127ms" carries no information about
                whether the interval was between 'a' and 'b' or between 'x'
                and 'y'.
              </t>
            </li>

            <li>
              <t>No Character Mapping:</t>
              <t>
                Timing intervals are stored in observation order without
                any association to specific characters, words, or semantic
                content.
              </t>
            </li>

            <li>
              <t>No Keyboard Event Codes:</t>
              <t>
                Scan codes, virtual key codes, and other
                keyboard identifiers
                are not recorded. The specification treats all input events
                uniformly as timing sources.
              </t>
            </li>
          </ul>

          <t>
            This architecture ensures that even with complete access to an
            Evidence packet, no information about what was typed can be
            reconstructed.
          </t>
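          <t>
            The timing-only constraint can be illustrated as follows: key
            identity is discarded at capture time and only inter-event
            intervals survive. The event tuple shape is a hypothetical
            sketch, not a normative interface:
          </t>
          <sourcecode type="python"><![CDATA[
def capture_intervals(events):
    # Keep only timestamps; key identity is discarded immediately.
    times = [t for (t, _key) in events]
    return [later - earlier for earlier, later in zip(times, times[1:])]

# The same intervals result regardless of which keys were pressed:
# an interval of 130 ms carries no information about 'a'/'b' vs 'x'/'y'.
assert capture_intervals([(0, "a"), (130, "b"), (245, "c")]) == \
       capture_intervals([(0, "x"), (130, "y"), (245, "z")])
]]></sourcecode>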
        </section>

        <section anchor="no-screen-capture">
          <name>No Screenshots or Screen Recording</name>

          <t>
            The specification explicitly excludes visual capture mechanisms:
          </t>

          <ul>
            <li>
              No screenshot capture at checkpoints or any other time
            </li>
            <li>
              No screen recording or video capture
            </li>
            <li>
              No window title or application name logging
            </li>
            <li>
              No clipboard content capture (only timing of clipboard events
              for monitoring-dependent absence claims, and
              only event counts,
              not content)
            </li>
          </ul>

          <t>
            Visual content capture would fundamentally violate the
            content-agnostic design and is architecturally excluded.
          </t>
        </section>

        <section anchor="local-generation">
          <name>Local Evidence Generation</name>

          <t>
            Evidence is generated entirely on the Attester device with no
            network dependency:
          </t>

          <ul>
            <li>
              <t>No Telemetry:</t>
              <t>
                The Attesting Environment does not transmit telemetry,
                analytics, or any behavioral data to external services.
              </t>
            </li>

            <li>
              <t>No Cloud Processing:</t>
              <t>
                All cryptographic computations (hashing, VDF, signatures)
                occur locally. No document content or behavioral data is
                sent to cloud services for processing.
              </t>
            </li>

            <li>
              <t>Optional External Anchors:</t>
              <t>
                The only network communication is optional: external anchors
                (RFC 3161 <xref target="RFC3161"/>, <xref target="OpenTimestamps"/>, blockchain) transmit only
                cryptographic hashes, never document content or behavioral
                data.
              </t>
            </li>
          </ul>

          <t>
            Users can generate and verify Evidence in fully air-gapped
            environments. External anchors enhance evidence strength but
            are not required.
          </t>
        </section>
      </section>

      <section anchor="data-minimization">
        <name>Data Minimization</name>

        <t>
          Following Section 6.1 of <xref target="RFC6973"/>, the specification
          minimizes data collection to what is strictly
          necessary for evidence
          generation and verification.
        </t>

        <section anchor="data-collected">
          <name>Data Collected</name>

          <t>
            The following data IS collected and included in
            Evidence packets:
          </t>

          <dl>
            <dt>Timing Histograms:</dt>
            <dd>
              <t>
                Inter-event timing intervals aggregated into
                histogram buckets
                (jitter-summary, key 3 in jitter-binding). Bucket boundaries
                are coarse (RECOMMENDED: 0, 50, 100, 200, 500, 1000, 2000,
                5000ms) to prevent precise interval reconstruction.
              </t>
            </dd>

            <dt>Edit Statistics:</dt>
            <dd>
              <t>
                Character counts for additions, deletions, and
                edit operations
                (edit-delta structure). These are aggregate counts, not
                positional data.
              </t>
            </dd>

            <dt>Checkpoint Hashes:</dt>
            <dd>
              <t>
                Cryptographic hashes of document states at each checkpoint.
                These are one-way functions; the content cannot be
                recovered from them.
              </t>
            </dd>

            <dt>VDF Proofs:</dt>
            <dd>
              <t>
                Verifiable Delay Function outputs proving minimum elapsed
                time. These are computational proofs, not behavioral data.
              </t>
            </dd>

            <dt>Optional: Raw Timing Intervals:</dt>
            <dd>
              <t>
                The raw-intervals field (key 5 in jitter-binding) MAY be
                included for enhanced verification. This is OPTIONAL and
                user-controlled. When omitted, only histogram aggregates
                are included.
              </t>
            </dd>
          </dl>
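          <t>
            As a non-normative illustration, the clipping and coarse
            bucketing described above can be sketched as follows. The
            bucket boundaries are the RECOMMENDED values; the function
            names are illustrative, not part of the specification:
          </t>
          <sourcecode type="python"><![CDATA[
import bisect

# RECOMMENDED coarse bucket floors, in milliseconds.
BUCKETS_MS = [0, 50, 100, 200, 500, 1000, 2000, 5000]
CLIP_MAX_MS = 5000

def clip_interval(ms):
    """Clip a raw inter-event interval to the [0, 5000] ms range."""
    return max(0, min(ms, CLIP_MAX_MS))

def histogram(intervals_ms):
    """Aggregate raw intervals into coarse histogram counts.

    Bucket i counts intervals t with
    BUCKETS_MS[i] <= t < BUCKETS_MS[i+1]; the final bucket
    absorbs everything clipped to 5000 ms.
    """
    counts = [0] * len(BUCKETS_MS)
    for t in intervals_ms:
        t = clip_interval(t)
        counts[bisect.bisect_right(BUCKETS_MS, t) - 1] += 1
    return counts
]]></sourcecode>
          <t>
            Only the resulting counts are committed to the Evidence
            packet; raw intervals never leave the device unless the
            user opts into the raw-intervals field.
          </t>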
        </section>

        <section anchor="data-not-collected">
          <name>Data NOT Collected</name>

          <t>
            The following data is explicitly NOT collected:
          </t>

          <ul>
            <li>
              Document content (text, images, formatting)
            </li>
            <li>
              Individual characters or words typed
            </li>
            <li>
              Keyboard scan codes or key identifiers
            </li>
            <li>
              Screenshots or visual captures
            </li>
            <li>
              Screen recordings or video
            </li>
            <li>
              Clipboard content (only the timing of clipboard events
              is captured)
            </li>
            <li>
              Window titles or application names
            </li>
            <li>
              User names, email addresses, or identifiers (the
              optional author declaration is user-controlled)
            </li>
            <li>
              IP addresses or network identifiers
            </li>
            <li>
              Location data
            </li>
          </ul>
        </section>

        <section anchor="disclosure-levels">
          <name>Disclosure Levels</name>

          <t>
            The specification supports tiered disclosure
            through optional fields:
          </t>

          <table>
            <thead>
              <tr>
                <th>Level</th>
                <th>Data Included</th>
                <th>Privacy Impact</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Minimal</td>
                <td>Hashes, VDF proofs, histogram summaries only</td>
                <td>Lowest</td>
              </tr>
              <tr>
                <td>Standard</td>
                <td>+ Presence challenges, forensics section</td>
                <td>Low-Moderate</td>
              </tr>
              <tr>
                <td>Enhanced</td>
                <td>+ Raw timing intervals, keystroke section</td>
                <td>Moderate</td>
              </tr>
              <tr>
                <td>Maximum</td>
                <td>+ Hardware attestation, absence claims</td>
                <td>Higher</td>
              </tr>
            </tbody>
          </table>

          <t>
            Users SHOULD select the minimum disclosure level
            that meets their
            verification requirements. Higher tiers provide
            stronger evidence
            at the cost of revealing more behavioral data.
          </t>
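          <t>
            As a non-normative sketch, tier selection can be modeled
            as cumulative field whitelists. The field names below are
            hypothetical stand-ins for the corresponding packet
            sections, not the normative CBOR keys:
          </t>
          <sourcecode type="python"><![CDATA[
# Illustrative tiered disclosure: strip optional Evidence fields
# down to a chosen level. Field names are hypothetical stand-ins.
TIER_FIELDS = {
    "minimal": {"checkpoint-hashes", "vdf-proofs",
                "jitter-summary"},
    "standard": {"presence-challenges", "forensics"},
    "enhanced": {"raw-intervals", "keystroke"},
    "maximum": {"hardware-attestation", "absence-claims"},
}
TIER_ORDER = ["minimal", "standard", "enhanced", "maximum"]

def fields_for_level(level):
    """Cumulative set of fields disclosed at a given level."""
    allowed = set()
    for tier in TIER_ORDER[: TIER_ORDER.index(level) + 1]:
        allowed |= TIER_FIELDS[tier]
    return allowed

def redact(packet, level):
    """Copy of the packet containing only fields for `level`."""
    allowed = fields_for_level(level)
    return {k: v for k, v in packet.items() if k in allowed}
]]></sourcecode>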
        </section>
      </section>

      <section anchor="biometric-adjacent-data">
        <name>Biometric-Adjacent Data</name>

        <t>
          Keystroke timing data, while not traditionally
          classified as biometric,
          has biometric-adjacent properties that warrant
          special consideration.
          This section addresses regulatory considerations and mitigation
          measures.
        </t>

        <section anchor="keystroke-timing-risks">
          <name>Identification Risks</name>

          <t>
            Research has demonstrated that keystroke dynamics can serve as
            a behavioral biometric:
          </t>

          <ul>
            <li>
              <t>Individual Identification:</t>
              <t>
                Detailed timing patterns can theoretically distinguish
                individuals with high accuracy across sessions.
              </t>
            </li>

            <li>
              <t>State Detection:</t>
              <t>
                Timing variations may correlate with cognitive state,
                fatigue, stress, or physical condition.
              </t>
            </li>

            <li>
              <t>Re-identification Risk:</t>
              <t>
                If an adversary has access to multiple Evidence packets from
                the same author, timing patterns might enable linkage across
                sessions even without explicit identity.
              </t>
            </li>
          </ul>
        </section>

      <section anchor="biometric-mitigations">
        <name>Re-identification Risk Mitigation</name>
        <t>
          To mitigate re-identification risk while preserving correlation utility,
          the protocol implements multiple layered defenses:
        </t>
        <ul>
          <li>
            <t>
              <strong>Timing Value Clipping:</strong> All timing values are clipped
              to the range [0, 5000ms], bounding the sensitivity of timing data and
              preventing outlier values from leaking behavioral information.
            </t>
          </li>
          <li>
            <t>
              <strong>Histogram Bucketing:</strong> Raw intervals are aggregated into
              coarse histogram buckets before commitment, reducing temporal resolution
              below the threshold required for biometric fingerprinting while preserving
              sufficient fidelity for the Spearman rho ≥ 0.7 correlation check.
            </t>
          </li>
          <li>
            <t>
              <strong>Hurst Exponent Validation:</strong> Only intervals exhibiting
              valid long-range temporal dependence (H ∈ [0.55, 0.85]) are accepted,
              filtering synthetic sequences that lack genuine behavioral dynamics.
            </t>
          </li>
        </ul>
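        <t>
          As a non-normative illustration of the correlation check
          above, Spearman's rho is the Pearson correlation of ranks
          and can be computed without external libraries. The interval
          values in the usage comment are illustrative:
        </t>
        <sourcecode type="python"><![CDATA[
def ranks(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while (j + 1 < len(xs)
               and xs[order[j + 1]] == xs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Example: raw intervals and their coarse bucket floors keep
# nearly identical rank order, so rho stays close to 1.0:
#   spearman_rho([30, 80, 250, 600], [0, 50, 200, 500])
]]></sourcecode>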
      </section>

      <section anchor="isochronous-release">
        <name>Isochronous Data Release (Heartbeat Quantization)</name>
        <t>
          To prevent side-channel leakage via packet arrival timing, the Attesting
          Environment MUST implement "Isochronous Emission." Rather than transmitting
          jitter metrics as they are captured, the system buffers the data and releases
          it in fixed-interval beats (e.g., every 5000ms).
        </t>
        <t>
          This rigid quantization eliminates the information leakage
          inherent in the burstiness of the user's typing. An adversary
          observing the network traffic sees only a constant heartbeat
          and must rely entirely on the clipped and bucketed histogram
          content, with no metadata about the temporal structure of the
          input stream.
        </t>
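        <t>
          A non-normative sketch of the buffering logic, driven here
          by explicit timestamps rather than a real timer (class and
          method names are illustrative):
        </t>
        <sourcecode type="python"><![CDATA[
BEAT_MS = 5000

class IsochronousEmitter:
    """Buffers captured metrics and releases them only on fixed
    beat boundaries, so emission times carry no information about
    when input actually occurred."""

    def __init__(self, beat_ms=BEAT_MS):
        self.beat_ms = beat_ms
        self.buffer = []
        self.emissions = []  # (beat_time_ms, batched_metrics)

    def capture(self, metric):
        """Buffer a metric; nothing leaves the device yet."""
        self.buffer.append(metric)

    def tick(self, now_ms):
        """Called by a timer; flushes on exact beat boundaries."""
        if now_ms % self.beat_ms == 0:
            self.emissions.append((now_ms, list(self.buffer)))
            self.buffer.clear()
]]></sourcecode>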
      </section>

      <section anchor="key-rotation-for-privacy">
        <name>Key Rotation for Privacy</name>

        <t>
          Timing data accumulated across multiple sessions from the same
          signing key provides more information for cross-session linkage
          attacks. Periodic key rotation limits the temporal window available
          for such analysis.
        </t>

        <section anchor="key-rotation-requirements">
          <name>Key Rotation Requirements</name>

          <t>
            Signing key rotation policies limit the accumulation of timing
            data under a single key identity:
          </t>

          <dl>
            <dt>Monthly Rotation (REQUIRED):</dt>
            <dd>
              <t>
                Implementations MUST rotate signing keys at least monthly.
                Key metadata SHOULD track the next rotation date and session
                count since key generation.
              </t>
            </dd>

            <dt>Weekly Rotation (RECOMMENDED):</dt>
            <dd>
              <t>
                For high-frequency evidence generation scenarios (more than
                4 sessions per week), weekly key rotation is RECOMMENDED
                to further limit cross-session analysis windows.
              </t>
            </dd>

            <dt>Session-Based Rotation:</dt>
            <dd>
              <t>
                Implementations MAY implement automatic key rotation after
                a configurable number of sessions (e.g., 20 sessions) rather
                than purely time-based rotation.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="rotation-verification">
          <name>Rotation Verification</name>

          <t>
            Verifiers SHOULD validate key rotation compliance:
          </t>

          <ol>
            <li>
              <t>
                Verify that key-valid-from is before the evidence packet
                creation timestamp and key-valid-until is after.
              </t>
            </li>

            <li>
              <t>
                Verify that the key validity period does not exceed the
                maximum allowed rotation interval (e.g., 31 days for monthly).
              </t>
            </li>

            <li>
              <t>
                If session counts are tracked, verify they are within
                recommended limits for the rotation policy.
              </t>
            </li>
          </ol>
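          <t>
            The first two checks above can be sketched as follows
            (non-normative; field names are illustrative, and the
            31-day ceiling corresponds to the monthly policy):
          </t>
          <sourcecode type="python"><![CDATA[
from datetime import datetime, timedelta

MAX_VALIDITY = timedelta(days=31)  # monthly policy ceiling

def check_rotation(key_valid_from, key_valid_until,
                   evidence_ts, max_validity=MAX_VALIDITY):
    """Return a list of caveat strings; empty means compliant.

    Violations are reported as caveats, not hard failures.
    """
    caveats = []
    if not (key_valid_from <= evidence_ts <= key_valid_until):
        caveats.append("timestamp outside key validity window")
    if key_valid_until - key_valid_from > max_validity:
        caveats.append("validity period exceeds rotation limit")
    return caveats
]]></sourcecode>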

          <t>
            Rotation policy violations SHOULD be reported as caveats in the
            attestation result but do not invalidate the evidence packet.
            The primary evidence (VDF, jitter, content binding) remains valid.
          </t>
        </section>
      </section>

        <section anchor="regulatory-considerations">
          <name>Regulatory Considerations</name>

          <t>
            Implementations and deployments should consider applicable
            privacy regulations:
          </t>

          <dl>
            <dt>GDPR (EU/EEA):</dt>
            <dd>
              <t>
                Keystroke dynamics may constitute "special categories of
                personal data" under Article 9 if used for identification
                purposes. Implementations should document whether timing
                data is used for identification (prohibited without explicit
                consent) or solely for process evidence (may fall under
                different legal basis).
              </t>
            </dd>

            <dt>CCPA (California):</dt>
            <dd>
              <t>
                Biometric information is covered under CCPA Section
                1798.140(b). Users have rights to know, delete, and opt-out.
                The local-only processing model simplifies compliance.
              </t>
            </dd>

            <dt>BIPA (Illinois):</dt>
            <dd>
              <t>
                Illinois Biometric Information Privacy Act has strict
                requirements for biometric data collection, including
                written policies and consent. Deployments in Illinois
                should consult legal counsel.
              </t>
            </dd>
          </dl>

          <t>
            The specification's local-only processing model and user control
            over data disclosure support compliance, but
            legal interpretation
            varies by jurisdiction.
          </t>
        </section>

        <section anchor="user-disclosure-requirements">
          <name>User Disclosure Requirements</name>

          <t>
            Implementations MUST inform users about behavioral
            data collection:
          </t>

          <ol>
            <li>
              Clear notification that timing data is captured
              during authoring
            </li>
            <li>
              Explanation of what timing data reveals and does not reveal
            </li>
            <li>
              Disclosure of where Evidence packets may be transmitted
            </li>
            <li>
              User control over disclosure levels (histogram-only vs. raw)
            </li>
            <li>
              Instructions for disabling timing capture if desired
            </li>
            <li>
              Process for reviewing and deleting captured data
            </li>
          </ol>

          <t>
            These disclosures SHOULD be presented before Evidence generation
            begins, not buried in terms of service.
          </t>
        </section>
      </section>

      <section anchor="salt-modes-privacy">
        <name>Salt Modes for Content Privacy</name>

        <t>
          The hash-salt-mode field (CDDL lines 164-168)
          enables privacy-preserving
          verification scenarios where document binding
          should not be globally
          verifiable.
        </t>

        <section anchor="unsalted-mode">
          <name>Unsalted Mode (Value 0)</name>

          <artwork><![CDATA[
    content-hash = H(document-content)
    ]]></artwork>

          <t>
            Properties:
          </t>

          <ul>
            <li>
              Anyone with the document can verify the binding
            </li>
            <li>
              No additional secret required for verification
            </li>
            <li>
              Document existence can be confirmed by any party with content
            </li>
          </ul>

          <t>
            Use cases:
          </t>

          <ul>
            <li>
              Public documents where verification should be open
            </li>
            <li>
              Academic submissions where verifiers have document access
            </li>
            <li>
              Published works where authorship claims should be checkable
            </li>
          </ul>

          <t>
            Privacy implications: Anyone who obtains both the document and
            the Evidence packet can confirm the binding. If document
            confidentiality matters, consider salted modes.
          </t>
        </section>

        <section anchor="author-salted-mode">
          <name>Author-Salted Mode (Value 1)</name>

          <artwork><![CDATA[
    content-hash = H(salt || document-content)
    salt-commitment = H(salt)
    ]]></artwork>

          <t>
            Properties:
          </t>

          <ul>
            <li>
              Author generates and retains the salt
            </li>
            <li>
              Evidence packet contains salt-commitment, not salt
            </li>
            <li>
              Author selectively reveals salt to chosen verifiers
            </li>
            <li>
              Without salt, document-hash relationship cannot be verified
            </li>
          </ul>

          <t>
            Use cases:
          </t>

          <ul>
            <li>
              Confidential documents where author controls verification
            </li>
            <li>
              Selective disclosure to specific reviewers or institutions
            </li>
            <li>
              Manuscripts under review before publication
            </li>
          </ul>

          <t>
            Privacy implications: The author has exclusive control over
            who can verify the document binding. The salt should be stored
            securely; loss of salt means verification becomes impossible.
          </t>
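          <t>
            A non-normative sketch of author-salted binding, using
            SHA-256 as a stand-in for the negotiated hash function H
            and a 256-bit random salt:
          </t>
          <sourcecode type="python"><![CDATA[
import hashlib
import secrets

def author_salted_commitment(document):
    """Author side: generate salt, content-hash, salt-commitment.

    content-hash    = H(salt || document-content)
    salt-commitment = H(salt)
    """
    salt = secrets.token_bytes(32)  # 256 bits, CSPRNG
    content_hash = hashlib.sha256(salt + document).digest()
    salt_commitment = hashlib.sha256(salt).digest()
    return salt, content_hash, salt_commitment

def verify_with_salt(document, salt, content_hash,
                     salt_commitment):
    """Chosen verifier: check the commitment and the binding."""
    return (hashlib.sha256(salt).digest() == salt_commitment
            and hashlib.sha256(salt + document).digest()
            == content_hash)
]]></sourcecode>
          <t>
            Without the salt, a holder of the Evidence packet cannot
            confirm the document binding, even with the document in
            hand; the author reveals the salt only to chosen
            verifiers.
          </t>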
        </section>

        <section anchor="salt-requirements">
          <name>Salt Requirements</name>
          <ul>
            <li>
              Salts MUST be cryptographically random (minimum 256 bits)
            </li>
            <li>
              Salts MUST NOT be derived from predictable values
            </li>
            <li>
              Salting prevents brute-force guessing of the content of
              short or low-entropy documents
            </li>
            <li>
              Salt loss makes verification impossible; backup appropriately
            </li>
            <li>
              Salt transmission should use secure channels
            </li>
          </ul>
        </section>
      </section>

      <section anchor="identity-pseudonymity">
        <name>Identity and Pseudonymity</name>
        <t>
          The specification supports multiple identity postures, from fully
          anonymous to strongly identified, with user
          control over disclosure.
        </t>

        <section anchor="anonymous-evidence">
          <name>Anonymous Evidence Generation</name>

          <t>
            Evidence packets can be generated without any
            identity disclosure:
          </t>

          <ul>
            <li>
              The declaration field (key 17 in evidence-packet) is OPTIONAL
            </li>
            <li>
              Within declaration, author-name (key 3) and author-id (key 4)
              are both OPTIONAL
            </li>
            <li>
              Device keys can be ephemeral, not linked to identity
            </li>
            <li>
              Evidence proves process characteristics without revealing who
            </li>
          </ul>

          <t>
            Anonymous evidence is suitable for contexts where process
            documentation matters but author identity is irrelevant or
            should remain confidential.
          </t>
        </section>

        <section anchor="pseudonymous-evidence">
          <name>Pseudonymous Evidence</name>

          <t>
            Pseudonymous use links evidence to a consistent
            identifier without
            revealing real-world identity:
          </t>

          <ul>
            <li>
              author-id can be a pseudonymous identifier
            </li>
            <li>
              Device key provides cryptographic continuity without identity
            </li>
            <li>
              Multiple works can be linked to the same pseudonym if desired
            </li>
            <li>
              Real identity can remain undisclosed
            </li>
          </ul>

          <t>
            Pseudonymous evidence enables reputation building without
            identity exposure.
          </t>
        </section>

        <section anchor="identified-evidence">
          <name>Identified Evidence</name>

          <t>
            For contexts requiring identity binding:
          </t>

          <ul>
            <li>
              author-name and author-id can be populated with real identity
            </li>
            <li>
              Declaration signature (key 6) binds identity claim to evidence
            </li>
            <li>
              Hardware attestation can strengthen device-to-person binding
            </li>
            <li>
              External identity verification is outside specification scope
            </li>
          </ul>

          <t>
            Identity strength depends on the verification context, not the
            specification. The specification provides the mechanism for
            identity claims; verification of those claims is a deployment
            concern.
          </t>
        </section>

        <section anchor="device-binding-identity">
          <name>Device Binding Without User Identification</name>

          <t>
            Hardware attestation (hardware-section) binds evidence to a
            specific device without necessarily identifying the user:
          </t>

          <ul>
            <li>
              Device keys are bound to hardware (TPM, Secure Enclave)
            </li>
            <li>
              Evidence proves generation on a specific device
            </li>
            <li>
              Device ownership is a separate question from
              evidence generation
            </li>
            <li>
              Multiple users of the same device produce device-linked evidence
            </li>
          </ul>

          <t>
            Device binding strengthens evidence integrity without requiring
            user identification. It proves "this device" without proving
            "this person."
          </t>
        </section>
      </section>

      <section anchor="data-retention-deletion">
        <name>Data Retention and Deletion</name>

        <t>
          Following RFC 6973 Section 6.2,
          this section addresses
          data lifecycle considerations.
        </t>

        <section anchor="evidence-lifecycle">
          <name>Evidence Packet Lifecycle</name>

          <t>
            Evidence packets are designed as archival artifacts:
          </t>

          <dl>
            <dt>Creation:</dt>
            <dd>
              <t>
                Evidence accumulates during authoring session(s). Packet is
                finalized when authoring is complete.
              </t>
            </dd>

            <dt>Distribution:</dt>
            <dd>
              <t>
                Packet may be transmitted to Verifiers, stored alongside
                documents, or archived for future verification needs.
              </t>
            </dd>

            <dt>Retention:</dt>
            <dd>
              <t>
                Retention period depends on use case. Legal documents may
                require indefinite retention; other contexts may allow
                shorter periods.
              </t>
            </dd>

            <dt>Deletion:</dt>
            <dd>
              <t>
                Once distributed, deletion from all recipients may be
                impractical. Authors should consider disclosure scope
                before distribution.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="user-deletion-rights">
          <name>User Rights to Deletion</name>

          <t>
            Users have the following deletion capabilities:
          </t>

          <ul>
            <li>
              <t>Local Data:</t>
              <t>
                Evidence stored locally can be deleted at any time by the
                author. Implementations SHOULD provide clear deletion
                mechanisms.
              </t>
            </li>

            <li>
              <t>Distributed Evidence:</t>
              <t>
                Once Evidence is transmitted to Verifiers or
                Relying Parties,
                deletion depends on those parties' policies.
                The specification
                cannot enforce deletion of distributed data.
              </t>
            </li>

            <li>
              <t>Attestation Results:</t>
              <t>
                .war files produced by Verifiers are controlled
                by those Verifiers.
                Authors may request deletion under applicable privacy laws.
              </t>
            </li>
          </ul>

          <t>
            Authors should understand that distributing
            Evidence creates copies
            outside their control. Privacy-sensitive authors should limit
            distribution scope.
          </t>
        </section>

        <section anchor="external-anchor-permanence">
          <name>External Anchor Permanence</name>

          <t>
            External anchors have special retention characteristics:
          </t>

          <dl>
            <dt>RFC 3161 Timestamps:</dt>
            <dd>
              <t>
                TSA records may be retained by the timestamp authority per
                their policies. Typically includes the hash committed, not
                any document or behavioral data.
              </t>
            </dd>

            <dt>Blockchain Anchors:</dt>
            <dd>
              <t>
                Blockchain records are permanent and immutable by design.
                The anchored hash cannot be deleted from the blockchain.
                This is a feature for evidence permanence but has privacy
                implications.
              </t>
            </dd>

            <dt>OpenTimestamps:</dt>
            <dd>
              <t>
                OTS proofs reference Bitcoin transactions, which are
                permanent. The proof structure can be deleted locally, but
                the Bitcoin transaction remains.
              </t>
            </dd>
          </dl>

          <t>
            Users concerned about data permanence should carefully consider
            whether to use blockchain-based external anchors. RFC 3161
            timestamps offer similar evidentiary value with
            more conventional
            retention policies.
          </t>

          <t>
            IMPORTANT: Only cryptographic hashes are
            anchored, never document
            content or behavioral data. The permanent record is a hash, not
            the underlying information.
          </t>
        </section>
      </section>

      <section anchor="third-party-disclosure">
        <name>Third-Party Disclosure</name>

        <t>
          This section addresses what information is disclosed to various
          parties in the verification workflow,
          following RFC 6973
          Section 5.2 on disclosure.
        </t>

        <section anchor="verifier-disclosure">
          <name>Information Disclosed to Verifiers</name>

          <t>
            When an Evidence packet (.pop) is submitted for verification,
            the Verifier learns:
          </t>

          <ul>
            <li>
              Document hash (content-hash) - NOT the content itself
            </li>
            <li>
              Document size (byte-length, char-count)
            </li>
            <li>
              Authoring timeline (checkpoint timestamps, VDF durations)
            </li>
            <li>
              Behavioral statistics (timing histograms, entropy estimates)
            </li>
            <li>
              Edit patterns (aggregate counts, not content)
            </li>
            <li>
              Optional: Raw timing intervals if disclosed
            </li>
            <li>
              Optional: Author identity if declared
            </li>
            <li>
              Optional: Device attestation if included
            </li>
          </ul>

          <t>
            Verifiers SHOULD NOT:
          </t>

          <ul>
            <li>
              Retain Evidence packets beyond verification needs
            </li>
            <li>
              Use behavioral data for purposes beyond verification
            </li>
            <li>
              Attempt to re-identify anonymous authors from
              behavioral patterns
            </li>
            <li>
              Share Evidence data with unauthorized parties
            </li>
          </ul>

          <t>
            Implementations MAY define Verifier privacy
            policies that authors
            can review before submitting Evidence.
          </t>
        </section>

        <section anchor="relying-party-disclosure">
          <name>Information Disclosed to Relying Parties</name>

          <t>
            Relying Parties consuming Attestation Results (.war) learn:
          </t>

          <ul>
            <li>
              Verification verdict (forensic-assessment)
            </li>
            <li>
              Confidence score
            </li>
            <li>
              Verified claims (specific thresholds met)
            </li>
            <li>
              Caveats and limitations
            </li>
            <li>
              Verifier identity
            </li>
            <li>
              Reference to the original Evidence packet (packet-id)
            </li>
          </ul>

          <t>
            The .war file is designed to provide necessary trust information
            without full Evidence disclosure. Relying Parties needing more
            detail can request the original .pop file.
          </t>
        </section>

        <section anchor="disclosure-minimization">
          <name>Minimizing Disclosure</name>

          <t>
            Authors concerned about disclosure can:
          </t>

          <ol>
            <li>
              Use minimal disclosure tier (histogram-only, no raw intervals)
            </li>
            <li>
              Omit optional sections (keystroke-section, absence-section)
            </li>
            <li>
              Use author-salted mode to control verification access
            </li>
            <li>
              Omit declaration or use pseudonymous identity
            </li>
            <li>
              Select Verifiers with strong privacy policies
            </li>
            <li>
              Limit distribution to necessary Relying Parties
            </li>
          </ol>
        </section>
      </section>

      <section anchor="cross-session-correlation">
        <name>Cross-Session Correlation</name>

        <t>
          This section addresses risks of behavioral fingerprinting across
          sessions and mitigation measures.
        </t>

        <section anchor="correlation-risks">
          <name>Correlation Risks</name>

          <t>
            Multiple Evidence packets from the same author
            may enable linkage:
          </t>

          <dl>
            <dt>Behavioral Fingerprinting:</dt>
            <dd>
              <t>
                Keystroke timing patterns exhibit individual characteristics
                that persist across sessions. An adversary with multiple
                Evidence packets could potentially link them to the same
                author even without explicit identity.
              </t>
            </dd>

            <dt>Device Fingerprinting:</dt>
            <dd>
              <t>
                If device keys are reused across sessions, Evidence packets
                are cryptographically linkable. Hardware attestation makes
                this linkage explicit.
              </t>
            </dd>

            <dt>Stylometric Correlation:</dt>
            <dd>
              <t>
                Edit pattern statistics (though not content) may correlate
                with writing style. Combined with timing data, this could
                strengthen cross-session linkage.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="device-key-rotation">
          <name>Device Key Rotation</name>

          <t>
            To limit cross-session correlation via device keys:
          </t>

          <ul>
            <li>
              <t>Session Keys:</t>
              <t>
                Use per-session derived keys rather than a
                single device key.
                HKDF with session-specific info prevents direct linkage.
              </t>
            </li>

            <li>
              <t>Periodic Rotation:</t>
              <t>
                Rotate device keys periodically (RECOMMENDED: annually).
                Evidence packets signed with different keys are not
                cryptographically linked.
              </t>
            </li>

            <li>
              <t>Context-Specific Keys:</t>
              <t>
                Use different keys for different contexts (e.g., work vs.
                personal) to prevent cross-context linkage.
              </t>
            </li>
          </ul>
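As an illustrative sketch (not part of the wire format), per-session key derivation via HKDF-SHA256 (RFC 5869) with session-specific info might look like the following; the key material, salt label, and session strings are hypothetical:

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF (Extract-then-Expand) instantiated with SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # Extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # Expand step
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

device_key = b"\x01" * 32  # hypothetical long-term device key
k1 = hkdf_sha256(device_key, b"pop-session", b"session:2026-02-11T09:00Z")
k2 = hkdf_sha256(device_key, b"pop-session", b"session:2026-02-12T10:30Z")
assert k1 != k2  # distinct per-session keys: no direct cryptographic linkage
```

Because HKDF is one-way, possession of two session keys does not reveal that they derive from the same device key.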
        </section>

        <section anchor="session-isolation">
          <name>Session Isolation Properties</name>

          <t>
            The specification provides inherent session isolation:
          </t>

          <ul>
            <li>
              Each Evidence packet has a unique packet-id (UUID)
            </li>
            <li>
              VDF chains are session-specific (session entropy in genesis)
            </li>
            <li>
              No protocol mechanism links sessions together
            </li>
            <li>
              Jitter data is bound to specific segment-based Merkle trees
            </li>
          </ul>

          <t>
            Cross-session linkage requires external analysis, not protocol
            features. The specification does not provide linkage mechanisms.
          </t>
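The isolation properties above can be sketched as follows; the genesis construction shown (hashing the packet-id with fresh session entropy) is an assumption for illustration, not the normative chain format:

```python
import hashlib
import os
import uuid

# Each session draws a fresh packet-id and fresh entropy, so VDF chains
# from different sessions share no derivable state.
packet_id = uuid.uuid4()           # unique per Evidence packet
session_entropy = os.urandom(32)   # per-session randomness in the genesis
genesis = hashlib.sha256(packet_id.bytes + session_entropy).digest()

other_genesis = hashlib.sha256(uuid.uuid4().bytes + os.urandom(32)).digest()
assert genesis != other_genesis    # independent sessions, unrelated chains
```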
        </section>

        <section anchor="correlation-mitigations">
          <name>Additional Mitigations</name>

          <t>
            Authors concerned about cross-session correlation can:
          </t>

          <ol>
            <li>
              Use coarser histogram buckets to reduce timing precision
            </li>
            <li>
              Omit raw-intervals field
            </li>
            <li>
              Vary devices for different document contexts
            </li>
            <li>
              Use different pseudonyms for different contexts
            </li>
            <li>
              Limit Evidence distribution to minimize adversary access to
              multiple packets
            </li>
          </ol>
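Mitigation 1 (coarser histogram buckets) amounts to merging adjacent timing buckets before disclosure. A minimal sketch, assuming millisecond bucket keys and counts (the bucket widths are illustrative):

```python
from collections import Counter

def coarsen(histogram: dict[int, int], factor: int) -> dict[int, int]:
    """Merge timing-histogram buckets to reduce disclosed precision.

    Keys are bucket lower bounds in milliseconds; counts are preserved,
    only their resolution is reduced.
    """
    merged: Counter = Counter()
    for bucket_ms, count in histogram.items():
        merged[(bucket_ms // factor) * factor] += count
    return dict(merged)

fine = {10: 4, 20: 7, 30: 2, 40: 9, 120: 1}  # hypothetical 10 ms buckets
coarse = coarsen(fine, 50)                    # re-bucketed at 50 ms
assert sum(coarse.values()) == sum(fine.values())  # no events lost
```

Coarser buckets retain the aggregate rhythm evidence while discarding the fine-grained timing detail most useful for re-identification.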
        </section>
      </section>

      <section anchor="privacy-threat-analysis">
        <name>Privacy Threat Analysis</name>

        <t>
          Following RFC 6973 Section 5,
          this section analyzes
          specific privacy threats.
        </t>

        <section anchor="threat-surveillance">
          <name>Surveillance</name>

          <t>
            The specification is designed to resist surveillance:
          </t>

          <ul>
            <li>
              No content transmission prevents content-based surveillance
            </li>
            <li>
              Local-only processing prevents network monitoring
            </li>
            <li>
              Optional external anchors transmit only hashes
            </li>
            <li>
              No telemetry or analytics collection
            </li>
          </ul>

          <t>
            The primary surveillance risk is through Evidence packet
            distribution. Authors control this distribution.
          </t>
        </section>

        <section anchor="threat-stored-data">
          <name>Stored Data Compromise</name>

          <t>
            If Evidence packets are compromised:
          </t>

          <ul>
            <li>
              Document content is NOT exposed (hash-only)
            </li>
            <li>
              Behavioral patterns MAY be exposed (timing data)
            </li>
            <li>
              Authoring timeline is exposed (timestamps)
            </li>
            <li>
              If identity declared, author identity is exposed
            </li>
          </ul>

          <t>
            Mitigation: Encrypt Evidence packets at rest.
            Use access controls
            for stored Evidence. Limit retention period where appropriate.
          </t>
        </section>

        <section anchor="threat-correlation">
          <name>Correlation</name>

          <t>
            Correlation threats are addressed in
            <xref target="cross-session-correlation"/>. Key
            mitigations include
            key rotation, histogram aggregation, and distribution limiting.
          </t>
        </section>

        <section anchor="threat-identification">
          <name>Identification</name>

          <t>
            Re-identification threats:
          </t>

          <ul>
            <li>
              Anonymous Evidence MAY be re-identifiable through behavioral
              patterns
            </li>
            <li>
              Histogram aggregation significantly reduces this risk
            </li>
            <li>
              Raw interval disclosure increases re-identification risk
            </li>
            <li>
              Device attestation explicitly identifies devices
            </li>
          </ul>

          <t>
            Authors requiring strong anonymity should use minimal disclosure
            tier without raw intervals and without device attestation.
          </t>
        </section>

        <section anchor="threat-secondary-use">
          <name>Secondary Use</name>

          <t>
            Evidence data could theoretically be used for purposes beyond
            verification:
          </t>

          <ul>
            <li>
              Behavioral analysis for profiling
            </li>
            <li>
              Productivity monitoring
            </li>
            <li>
              Training data for machine learning
            </li>
          </ul>

          <t>
            Mitigation: The specification does not prevent secondary use by
            data recipients. Authors should consider Verifier and Relying
            Party policies before disclosure. Implementations MAY include
            usage restrictions in Evidence packet metadata.
          </t>
        </section>

        <section anchor="threat-disclosure">
          <name>Disclosure</name>

          <t>
            Unauthorized disclosure of Evidence packets:
          </t>

          <ul>
            <li>
              Authors control initial distribution
            </li>
            <li>
              Recipients may further distribute; specification
              cannot prevent
            </li>
            <li>
              Salted modes limit utility of leaked Evidence
            </li>
            <li>
              Anonymous Evidence limits identity exposure on leak
            </li>
          </ul>

          <t>
            Authors should treat Evidence packets as potentially sensitive
            and limit distribution to trusted parties.
          </t>
        </section>

        <section anchor="threat-exclusion">
          <name>Exclusion</name>

          <t>
            The risk that authors cannot participate in systems if they
            decline Evidence generation:
          </t>

          <ul>
            <li>
              Evidence generation is voluntary
            </li>
            <li>
              Disclosure levels are user-controlled
            </li>
            <li>
              Relying Parties may require Evidence for certain contexts
            </li>
            <li>
              The specification does not mandate deployment contexts
            </li>
          </ul>

          <t>
            Deployments should consider whether Evidence requirements create
            exclusionary effects and provide alternatives where appropriate.
          </t>
        </section>
      </section>

      <section anchor="privacy-summary">
        <name>Privacy Properties Summary</name>

        <t>
          This section summarizes the privacy properties provided and not
          provided by the specification.
        </t>

        <section anchor="privacy-provided">
          <name>Privacy Properties Provided</name>

          <dl>
            <dt>Content Confidentiality:</dt>
            <dd>
              <t>
                Document content is never stored in Evidence. Verification
                can occur without content access (using salted modes).
              </t>
            </dd>

            <dt>Keystroke Privacy:</dt>
            <dd>
              <t>
                Individual keystrokes are never recorded. Only timing
                intervals between events are captured, without character
                association.
              </t>
            </dd>

            <dt>Local Control:</dt>
            <dd>
              <t>
                All data processing occurs locally. No external services
                required for Evidence generation.
              </t>
            </dd>

            <dt>Disclosure Control:</dt>
            <dd>
              <t>
                Authors control Evidence distribution, disclosure level,
                and identity exposure.
              </t>
            </dd>

            <dt>Pseudonymity Support:</dt>
            <dd>
              <t>
                Evidence can be generated and verified without real-world
                identity disclosure.
              </t>
            </dd>

            <dt>Selective Verification:</dt>
            <dd>
              <t>
                Salted modes enable author-controlled verification access.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="privacy-limitations">
          <name>Privacy Limitations</name>

          <dl>
            <dt>Behavioral Data Exposure:</dt>
            <dd>
              <t>
                Timing data reveals behavioral patterns. While aggregated,
                this data has biometric-adjacent properties.
              </t>
            </dd>

            <dt>Distribution Not Controlled:</dt>
            <dd>
              <t>
                Once Evidence is distributed, the specification cannot
                control further dissemination or use.
              </t>
            </dd>

            <dt>Cross-Session Linkage Risk:</dt>
            <dd>
              <t>
                Multiple Evidence packets may be linkable through behavioral
                analysis, even with different identities.
              </t>
            </dd>

            <dt>External Anchor Permanence:</dt>
            <dd>
              <t>
                Blockchain anchors create permanent records that cannot be
                deleted.
              </t>
            </dd>

            <dt>Metadata Disclosure:</dt>
            <dd>
              <t>
                Evidence packets reveal document size, authoring timeline,
                and edit statistics even without content.
              </t>
            </dd>
          </dl>
        </section>

        <section anchor="privacy-recommendations">
          <name>Recommendations for Privacy-Sensitive Deployments</name>

          <ol>
            <li>
              Use minimal disclosure tier (histogram-only, no raw intervals)
            </li>
            <li>
              Consider coarser histogram buckets for enhanced privacy
            </li>
            <li>
              Use author-salted mode for confidential documents
            </li>
            <li>
              Avoid blockchain anchors if deletion rights are important
            </li>
            <li>
              Rotate device keys periodically
            </li>
            <li>
              Limit Evidence distribution to necessary parties
            </li>
            <li>
              Review Verifier privacy policies before submission
            </li>
            <li>
              Consider pseudonymous identities where appropriate
            </li>
            <li>
              Provide clear user disclosures about data collection
            </li>
            <li>
              Implement data retention policies aligned with use case
            </li>
          </ol>
        </section>
      </section>
  </section>


    <section anchor="error-taxonomy">
      <name>Error Handling and Recovery</name>
      <t>
        Implementations MUST handle verification failures and evidence deficiencies
        according to the following taxonomy. Errors are classified by their impact on
        the forensic-assessment verdict and the required recovery actions.
      </t>
      <table anchor="tbl-error-taxonomy">
        <name>Error Taxonomy</name>
        <thead>
          <tr>
            <th>Error Code</th>
            <th>Description</th>
            <th>Impact</th>
            <th>Recovery Action</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>ERR_VDF_MISMATCH</td>
            <td>VDF output recomputation failed</td>
            <td>FATAL (Evidence Invalid)</td>
            <td>Reject Evidence packet</td>
          </tr>
          <tr>
            <td>ERR_ENTROPY_LOW</td>
            <td>Jitter entropy below tier threshold</td>
            <td>WARNING (Reduced Confidence)</td>
            <td>Flag in Attestation Result caveats</td>
          </tr>
          <tr>
            <td>ERR_CALIB_GAPPED</td>
            <td>Missing or untrusted calibration</td>
            <td>MAJOR (Tier downgrade)</td>
            <td>Treat as Basic tier evidence</td>
          </tr>
          <tr>
            <td>ERR_CHAIN_GAP</td>
            <td>Non-consecutive sequence numbers</td>
            <td>FATAL (Evidence Tampered)</td>
            <td>Reject Evidence packet</td>
          </tr>
        </tbody>
      </table>
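A Verifier's appraisal loop over this taxonomy can be sketched as follows; the table's codes and impacts are taken as given, while the return strings and dispatch shape are illustrative:

```python
from enum import Enum

class Impact(Enum):
    FATAL = "fatal"
    MAJOR = "major"
    WARNING = "warning"

# Mirrors the error taxonomy table; recovery handling is illustrative.
TAXONOMY = {
    "ERR_VDF_MISMATCH": Impact.FATAL,    # reject Evidence packet
    "ERR_ENTROPY_LOW":  Impact.WARNING,  # flag in Attestation Result caveats
    "ERR_CALIB_GAPPED": Impact.MAJOR,    # treat as Basic tier evidence
    "ERR_CHAIN_GAP":    Impact.FATAL,    # reject Evidence packet
}

def appraise(errors: list[str]) -> str:
    """Collapse observed errors into a single verdict by worst impact."""
    impacts = {TAXONOMY[e] for e in errors}
    if Impact.FATAL in impacts:
        return "reject"
    if Impact.MAJOR in impacts:
        return "downgrade"
    return "accept-with-caveats" if impacts else "accept"

assert appraise(["ERR_ENTROPY_LOW"]) == "accept-with-caveats"
assert appraise(["ERR_CHAIN_GAP", "ERR_ENTROPY_LOW"]) == "reject"
```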
    </section>

  <section anchor="versioning">
    <name>Protocol Versioning and Migration</name>
    <t>
      PPPP uses semantic versioning. Version 1.1 introduced mandatory
      VDF-jitter entanglement. Verifiers MUST accept v1.0 (non-entangled)
      packets for backward compatibility but SHOULD flag them with a
      "Legacy" warning. Future versions that make breaking changes to the
      VDF iteration function will increment the major version and require
      a new CBOR tag.
    </t>
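The compatibility rule above can be expressed as a small version gate; the version-string format and the verdict labels below are illustrative assumptions:

```python
def classify_packet_version(version: str) -> str:
    """Sketch of the v1.x compatibility rule: accept v1.1+, warn on v1.0,
    refuse other major versions (which require a new CBOR tag)."""
    major, minor = (int(part) for part in version.split("."))
    if major != 1:
        return "unsupported"
    if (major, minor) < (1, 1):
        return "accept-legacy"   # pre-entanglement packet: flag "Legacy"
    return "accept"

assert classify_packet_version("1.0") == "accept-legacy"
assert classify_packet_version("1.1") == "accept"
assert classify_packet_version("2.0") == "unsupported"
```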
  </section>

  <section anchor="normative-error-handling">
    <name>Normative Error Handling</name>
    <t>
      Verifiers MUST implement the following error handling procedures:
    </t>
    <ul>
      <li><strong>ERR_VDF_MISMATCH</strong>: If the recomputed VDF output does not match the reported segment output, the entire evidence chain MUST be rejected as fraudulent.</li>
      <li><strong>ERR_RESIDENCY_STALE</strong>: If the hardware attestation quote (TPM/SE) is older than 24 hours, the Residency (R) score MUST be degraded to 0.5.</li>
      <li><strong>ERR_ENTROPY_LOW</strong>: If the Behavioral Consistency (B) metric is below 0.7, the result MUST flag "Unverifiable Interactive Process" but SHOULD NOT invalidate the Residency (R) and Sequence (S) proofs.</li>
    </ul>
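The three normative rules can be sketched as a single appraisal step; the input field names and the result dictionary shape are illustrative assumptions, not part of the Attestation Result format:

```python
def apply_error_rules(vdf_ok: bool, quote_age_hours: float,
                      behavioral_consistency: float) -> dict:
    """Illustrative application of the three normative error rules."""
    if not vdf_ok:
        # ERR_VDF_MISMATCH: the whole chain is rejected as fraudulent.
        return {"verdict": "rejected", "reason": "ERR_VDF_MISMATCH"}
    result = {"verdict": "accepted", "residency": 1.0, "flags": []}
    if quote_age_hours > 24:
        # ERR_RESIDENCY_STALE: degrade the Residency (R) score to 0.5.
        result["residency"] = 0.5
        result["flags"].append("ERR_RESIDENCY_STALE")
    if behavioral_consistency < 0.7:
        # ERR_ENTROPY_LOW: flag, but do not invalidate the (R, S) proofs.
        result["flags"].append("Unverifiable Interactive Process")
    return result

r = apply_error_rules(vdf_ok=True, quote_age_hours=30,
                      behavioral_consistency=0.6)
assert r["residency"] == 0.5
assert "Unverifiable Interactive Process" in r["flags"]
```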
  </section>

    <section anchor="iana-considerations">
      <name>IANA Considerations</name>

      <t>
        This document requests creation of the "Proof of Process VDF
        Algorithms" registry. This registry contains identifiers for
        Verifiable Delay Function algorithms used in Evidence packets.
      </t>

      <t>
        This document additionally requests CBOR tag registrations and
        documents its use of an SMI Private Enterprise Number (PEN),
        providing structured identity anchoring similar to that of
        RFC 9334.
      </t>

      <section anchor="iana-cbor-tags-registration">
        <name>CBOR Tags Registration</name>
        <t>
          This document requests the registration of the following tags in the
          "CBOR Tags" registry <xref target="IANA.cbor-tags"/>:
        </t>
        <ul>
          <li>Tag: 1347440672, Data Item: PPPP Evidence, Description: Proof of Process Provenance Evidence Packet</li>
          <li>Tag: 1463898656, Data Item: PPPP Result, Description: Proof of Process Provenance Verification Result</li>
          <li>Tag: 1347440673, Data Item: PPPP Compact Ref, Description: Compact Reference to PPPP Evidence</li>
        </ul>
      </section>

      <section anchor="iana-cbor-tags-summary">
        <name>CBOR Tags Registry</name>

        <t>
          The following table summarizes the requested tags, their
          byte-level encodings, and their semantics.
        </t>

        <table anchor="tbl-cbor-tags-summary">
          <name>CBOR Tags Summary</name>
          <thead>
            <tr>
              <th>Tag</th>
              <th>Hex</th>
              <th>ASCII</th>
              <th>Data Item</th>
              <th>Semantics</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1347440672</td>
              <td>0x50505020</td>
              <td>"PPP "</td>
              <td>map</td>
              <td>PPPP Evidence Packet (.pppp)</td>
            </tr>
            <tr>
              <td>1463898656</td>
              <td>0x57415220</td>
              <td>"WAR "</td>
              <td>map</td>
              <td>PPPP Result (.war)</td>
            </tr>
            <tr>
              <td>1347440673</td>
              <td>0x50505021</td>
              <td>"PPP!"</td>
              <td>map</td>
              <td>PPPP Compact Evidence Reference</td>
            </tr>
          </tbody>
        </table>
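As a quick check, each tag value is simply the big-endian integer reading of its four ASCII bytes:

```python
# The four-byte tag values are big-endian encodings of ASCII mnemonics.
assert int.from_bytes(b"WAR ", "big") == 0x57415220
assert int.from_bytes(b"PPP!", "big") == 0x50505021

def mnemonic(tag: int) -> str:
    """Recover the four-character ASCII mnemonic from a tag value."""
    return tag.to_bytes(4, "big").decode("ascii")

assert mnemonic(0x50505021) == "PPP!"
```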
      </section>

      <section anchor="iana-pen-registry">
        <name>Private Enterprise Number (PEN) Registry</name>
        <t>
          This specification uses SMI Private Enterprise Number 65074,
          analogous to the vendor-specific OID arcs used in X.509 PKI.
        </t>
        <artwork><![CDATA[
    1.3.6.1.4.1.65074
      .1 - PPPP Protocol Core
        .1 - Invariant: Residency (R)
        .2 - Invariant: Sequence (S)
        .3 - Invariant: Behavioral Consistency (B)
      .2 - PPPP Evidence Tiers
      .3 - PPPP Revocation Reasons
    ]]></artwork>
      </section>

        <section anchor="iana-tag-attestation-result">
          <name>Tag for Writers Authenticity Report (0x57415220)</name>

          <t>
            The tag value 1463898656 (hexadecimal 0x57415220) corresponds to
            the ASCII encoding of "WAR " and identifies Writers Authenticity
            Report structures. This tag encapsulates an Attestation Result
            produced by Verifiers after appraising Proof of Process Evidence
            Packets.
          </t>

          <t>
            The WAR format conveys verification verdicts, confidence scores,
            and forensic assessments following the IETF RATS
            (Remote ATtestation
            procedureS) architecture. A dedicated tag enables
            zero-configuration
            identification of attestation results, allowing
            Relying Parties to
            distinguish verification outcomes from raw evidence without
            content-type negotiation.
          </t>

          <t>
            The tagged data item is a CBOR map conforming to the
            attestation-result structure defined in
            <xref target="attestation-result-structure"/>.
          </t>
        </section>

        <section anchor="iana-tag-compact-ref">
          <name>Tag for Compact Evidence Reference (0x50505021)</name>

          <t>
            The tag value 1347440673 (hexadecimal 0x50505021) corresponds to
            the ASCII encoding of "PPP!" and identifies Compact Evidence
            Reference structures. This tag encapsulates a
            cryptographic pointer
            to a full Proof of Process Evidence Packet.
          </t>

          <t>
            Compact Evidence References are designed for embedding in
            space-constrained contexts such as document
            metadata (PDF XMP, EXIF),
            QR codes, NFC tags, git commit messages, and protocol headers.
            The compact reference contains the packet-id, chain-hash,
            document-hash, and a summary with a cryptographic signature
            binding all fields. A dedicated tag enables zero-configuration
            detection and verification of authorship claims without
            transmitting full evidence packets.
          </t>

          <t>
            The tagged data item is a CBOR map conforming to the
            compact-evidence-ref structure defined in
            <xref target="compact-evidence"/>.
          </t>
        </section>

        <section anchor="iana-tag-justification">
          <name>Justification for Dedicated Tags</name>

          <t>
            The four-byte tag values were chosen for the following reasons:
          </t>

          <ul>
            <li>
              <strong>Self-describing format:</strong> The ASCII-based
              mnemonics ("PPP ", "WAR ", "PPP!") enable immediate visual
              identification in hex dumps and debugging contexts.
            </li>
            <li>
              <strong>Zero-configuration detection:</strong> Applications
              can identify Proof of Process data without prior context
              or content-type negotiation.
            </li>
            <li>
              <strong>Interoperability:</strong> Standardized tags enable
              diverse implementations (academic systems,
              publishing platforms,
              verification services) to recognize and process data
              without coordination.
            </li>
            <li>
              <strong>Compact encoding:</strong> Despite being 4-byte tags,
              CBOR's efficient encoding minimizes overhead for these
              application-specific semantic markers.
            </li>
          </ul>
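The self-describing property follows from CBOR's head encoding: a tag with a 32-bit argument is serialized as the initial byte 0xDA (major type 6, additional information 26) followed by the four tag bytes, so the mnemonic appears verbatim in a hex dump. A minimal sketch:

```python
def tag_header(tag: int) -> bytes:
    """Encode a CBOR tag head for a uint32 tag value (RFC 8949 major type 6)."""
    assert 0 <= tag <= 0xFFFFFFFF
    return bytes([0xDA]) + tag.to_bytes(4, "big")

header = tag_header(0x50505021)
assert header.hex() == "da50505021"
assert header[1:] == b"PPP!"  # mnemonic directly visible in the raw bytes
```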
        </section>
  </section>

  <section anchor="iana-eat-profile">
    <name>Entity Attestation Token Profiles Registry</name>

    <t>
      This document requests registration of an EAT profile in the
      "Entity Attestation Token Profiles" registry established by
      <xref target="RFC9711"/>.
    </t>

    <table anchor="tbl-eat-profile">
      <name>EAT Profile Registration</name>
      <thead>
        <tr>
          <th>Profile URI</th>
          <th>Description</th>
          <th>Reference</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>https://example.com/rats/eat/profile/pop/1.0</td>
          <td>witnessd Proof of Process Evidence Profile</td>
          <td>[this document]</td>
        </tr>
      </tbody>
    </table>

    <t>
      Note: The URI https://example.com/rats/eat/profile/pop/1.0 is
      provisional during individual submission. Upon working group
      adoption, registration of an IANA-hosted profile URI will be
      requested (e.g., urn:ietf:params:rats:eat:profile:pop:1.0).
    </t>

    <t>
      The profile defines the following characteristics:
    </t>

    <dl>
      <dt>Profile Version:</dt>
      <dd>1.0</dd>

      <dt>Applicable Claims:</dt>
      <dd>
        All standard EAT claims per RFC 9711, plus the
        custom claims defined in <xref target="iana-cwt-claims"/>.
      </dd>

      <dt>Evidence Format:</dt>
      <dd>
        CBOR-encoded evidence-packet structure with semantic tag
        1347440672 (0x50505020).
      </dd>

      <dt>Attestation Result Format:</dt>
      <dd>
        CBOR-encoded attestation-result structure with semantic tag
        1463898656 (0x57415220).
      </dd>

      <dt>Domain:</dt>
      <dd>
        Document authorship process attestation, behavioral evidence
        for content provenance.
      </dd>
    </dl>
  </section>

  <section anchor="iana-cwt-claims">
    <name>CBOR Web Token Claims Registry</name>

    <t>
      This document requests registration of custom claims in the
      "CBOR Web Token (CWT) Claims" registry <xref target="IANA.cwt"/>.
      These claims are used within EAT Attestation Results to convey
      witnessd-specific assessment data.
    </t>

    <t>
      Initial registration is requested in the private-use range
      (-70000 to -70010) to enable early implementation. Upon standards
      track advancement, permanent positive claim keys
      will be requested.
    </t>

    <table anchor="tbl-cwt-claims">
      <name>Custom CWT Claims Registration</name>
      <thead>
        <tr>
          <th>Claim Name</th>
          <th>Claim Key</th>
          <th>Claim Value Type</th>
          <th>Claim Description</th>
          <th>Reference</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>pop-forensic-assessment</td>
          <td>-70000</td>
          <td>unsigned integer</td>
          <td>
            Forensic assessment enumeration value (0-5) indicating
            the Verifier's assessment of behavioral evidence consistency
            with human authorship patterns.
          </td>
          <td>[this document]</td>
        </tr>
        <tr>
          <td>pop-presence-score</td>
          <td>-70001</td>
          <td>unsigned integer (scaled by 1000)</td>
          <td>
            Presence challenge response score in the range 0-1000
            (divide by 1000 to obtain the 0.0-1.0 ratio), representing
            the fraction of successfully completed human presence
            challenges.
          </td>
          <td>[this document]</td>
        </tr>
        <tr>
          <td>pop-evidence-tier</td>
          <td>-70002</td>
          <td>unsigned integer</td>
          <td>
            Evidence tier classification (1-4) indicating the
            comprehensiveness of evidence collected: 1=Basic,
            2=Standard, 3=Enhanced, 4=Maximum.
          </td>
          <td>[this document]</td>
        </tr>
        <tr>
          <td>pop-ai-composite-score</td>
          <td>-70003</td>
          <td>unsigned integer (scaled by 1000)</td>
          <td>
            AI indicator composite score in the range 0-1000 (divide by
            1000 to obtain the 0.0-1.0 value), derived from behavioral
            forensic analysis. Lower values indicate patterns more
            consistent with human authorship.
          </td>
          <td>[this document]</td>
        </tr>
      </tbody>
    </table>
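A sketch of encoding and decoding the two scaled (0-1000) score claims; the constant names and claim map shown are illustrative:

```python
# Hypothetical helpers for the scaled score claims defined above.
POP_PRESENCE_SCORE = -70001
POP_AI_COMPOSITE = -70003

def encode_score(ratio: float) -> int:
    """Scale a 0.0-1.0 ratio to the 0-1000 integer claim value."""
    assert 0.0 <= ratio <= 1.0
    return round(ratio * 1000)

def decode_score(value: int) -> float:
    """Recover the 0.0-1.0 ratio from a 0-1000 claim value."""
    assert 0 <= value <= 1000
    return value / 1000

claims = {POP_PRESENCE_SCORE: encode_score(0.85),
          POP_AI_COMPOSITE: encode_score(0.12)}
assert claims[POP_PRESENCE_SCORE] == 850
assert decode_score(claims[POP_AI_COMPOSITE]) == 0.12
```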

    <t>
      The forensic-assessment enumeration values for
      pop-forensic-assessment
      are defined as:
    </t>

    <table anchor="tbl-forensic-assessment-values">
      <name>Forensic Assessment Enumeration Values</name>
      <thead>
        <tr>
          <th>Value</th>
          <th>Name</th>
          <th>Description</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td>0</td>
          <td>not-assessed</td>
          <td>Verification incomplete or not attempted</td>
        </tr>
        <tr>
          <td>1</td>
          <td>manual-composition-consistent</td>
          <td>Evidence strongly consistent with manual composition patterns</td>
        </tr>
        <tr>
          <td>2</td>
          <td>manual-composition-likely</td>
          <td>Evidence suggests manual composition patterns</td>
        </tr>
        <tr>
          <td>3</td>
          <td>inconclusive</td>
          <td>Evidence neither confirms nor refutes claims</td>
        </tr>
        <tr>
          <td>4</td>
          <td>automated-assisted-likely</td>
          <td>Evidence consistent with automated assistance patterns in authorship</td>
        </tr>
        <tr>
          <td>5</td>
          <td>automated-insertion-consistent</td>
          <td>Evidence strongly consistent with bulk automated insertion patterns</td>
        </tr>
      </tbody>
    </table>
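The enumeration maps naturally onto an integer enum; the class and helper below are an illustrative sketch, not a normative API:

```python
from enum import IntEnum

class ForensicAssessment(IntEnum):
    """Values of the pop-forensic-assessment claim, per the table above."""
    NOT_ASSESSED = 0
    MANUAL_COMPOSITION_CONSISTENT = 1
    MANUAL_COMPOSITION_LIKELY = 2
    INCONCLUSIVE = 3
    AUTOMATED_ASSISTED_LIKELY = 4
    AUTOMATED_INSERTION_CONSISTENT = 5

def is_manual_leaning(value: int) -> bool:
    """Hypothetical Relying Party predicate over the enumeration."""
    return ForensicAssessment(value) in (
        ForensicAssessment.MANUAL_COMPOSITION_CONSISTENT,
        ForensicAssessment.MANUAL_COMPOSITION_LIKELY,
    )

assert is_manual_leaning(1) and not is_manual_leaning(3)
```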
  </section>

  <section anchor="iana-new-registries">
    <name>New Registries</name>

    <t>
      This document requests IANA to create three new registries under
      a new "witnessd Proof of Process" registry group.
    </t>

    <section anchor="iana-claim-types-registry">
      <name>Proof of Process Claim Types Registry</name>

      <t>
        This document requests creation of the "Proof of Process Claim
        Types" registry. This registry contains the identifiers for
        absence claims that can be asserted and verified in Evidence
        packets.
      </t>

      <section anchor="iana-claim-types-procedures">
        <name>Registration Procedures</name>

        <t>
          The registration procedures for this registry depend on the
          claim type range:
        </t>

        <table anchor="tbl-claim-types-procedures">
          <name>Claim Types Registration Procedures</name>
          <thead>
            <tr>
              <th>Range</th>
              <th>Category</th>
              <th>Registration Procedure</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1-15</td>
              <td>Chain-verifiable claims</td>
              <td>Specification Required</td>
            </tr>
            <tr>
              <td>16-63</td>
              <td>Monitoring-dependent claims</td>
              <td>Specification Required</td>
            </tr>
            <tr>
              <td>64-127</td>
              <td>Environmental claims</td>
              <td>Expert Review</td>
            </tr>
            <tr>
              <td>128-255</td>
              <td>Private use</td>
              <td>First Come First Served</td>
            </tr>
          </tbody>
        </table>

        <t>
          Chain-verifiable claims (1-15) are claims that can be proven
          solely from the Evidence packet without trusting the Attesting
          Environment beyond data integrity. These claims require a
          published specification demonstrating verifiability.
        </t>

        <t>
          Monitoring-dependent claims (16-63) require trust in the
          Attesting Environment's accurate reporting of monitored events.
          Specifications MUST document the trust assumptions.
        </t>

        <t>
          Environmental claims (64-127) relate to the execution
          environment or external conditions. Expert review ensures
          claims are well-defined and implementable.
        </t>

        <t>
          Private use claims (128-255) are available for
          implementation-specific extensions without coordination.
        </t>
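The range-to-procedure mapping above can be sketched as a lookup; the tuple return shape is illustrative:

```python
def registration_category(claim_type: int) -> tuple[str, str]:
    """Map a claim-type value to its (category, registration procedure)."""
    if 1 <= claim_type <= 15:
        return ("chain-verifiable", "Specification Required")
    if 16 <= claim_type <= 63:
        return ("monitoring-dependent", "Specification Required")
    if 64 <= claim_type <= 127:
        return ("environmental", "Expert Review")
    if 128 <= claim_type <= 255:
        return ("private-use", "First Come First Served")
    raise ValueError("claim type out of registry range")

assert registration_category(4) == ("chain-verifiable", "Specification Required")
assert registration_category(200)[1] == "First Come First Served"
```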
      </section>

      <section anchor="iana-claim-types-template">
        <name>Registration Template</name>

        <t>
          Registrations MUST include the following fields:
        </t>

        <dl>
          <dt>Claim Type Value:</dt>
          <dd>Integer identifier in the appropriate range</dd>

          <dt>Claim Name:</dt>
          <dd>Human-readable name (lowercase with hyphens)</dd>

          <dt>Category:</dt>
          <dd>
            One of: chain-verifiable, monitoring-dependent,
            environmental, or private-use
          </dd>

          <dt>Description:</dt>
          <dd>Brief description of what the claim asserts</dd>

          <dt>Verification Method:</dt>
          <dd>
            How the claim is verified (for non-private-use claims)
          </dd>

          <dt>Reference:</dt>
          <dd>Document defining the claim</dd>
        </dl>
      </section>

      <section anchor="iana-claim-types-initial">
        <name>Initial Registry Contents</name>

        <t>
          The initial contents of the "Proof of Process Claim Types"
          registry are as follows:
        </t>

        <section anchor="iana-claim-types-computationally-bound">
          <name>Chain-Verifiable Claims (1-15)</name>

          <table anchor="tbl-computationally-bound-claims">
            <name>Chain-Verifiable Claim Types</name>
            <thead>
              <tr>
                <th>Value</th>
                <th>Name</th>
                <th>Description</th>
                <th>Reference</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>1</td>
                <td>max-single-delta-chars</td>
                <td>
                  Maximum characters added in any single checkpoint delta
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>2</td>
                <td>max-single-delta-bytes</td>
                <td>
                  Maximum bytes added in any single checkpoint delta
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>3</td>
                <td>max-net-delta-chars</td>
                <td>
                  Maximum net character change across the entire chain
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>4</td>
                <td>min-vdf-duration-seconds</td>
                <td>
                  Minimum total VDF-proven elapsed time in seconds
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>5</td>
                <td>min-vdf-duration-per-kchar</td>
                <td>
                  Minimum VDF-proven time per thousand characters
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>6</td>
                <td>checkpoint-chain-complete</td>
                <td>
                  Segment chain has no gaps (all sequence numbers present)
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>7</td>
                <td>checkpoint-chain-consistent</td>
                <td>
                  All segment hashes and VDF linkages verify correctly
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>8</td>
                <td>jitter-entropy-above-threshold</td>
                <td>
                  Captured jitter entropy exceeds specified bits threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>9</td>
                <td>jitter-samples-above-count</td>
                <td>
                  Number of jitter samples exceeds specified count
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>10</td>
                <td>revision-points-above-count</td>
                <td>
                  Number of revision points (non-monotonic edits) exceeds threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>11</td>
                <td>session-count-above-threshold</td>
                <td>
                  Number of distinct editing sessions exceeds threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>12</td>
                <td>cognitive-load-integrity</td>
                <td>
                  Complexity-timing correlation exceeds threshold
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>13</td>
                <td>intra-session-consistency</td>
                <td>
                  Behavioral timing remains in stable cluster
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>14</td>
                <td>complexity-timing-correlation</td>
                <td>
                  Information density correlates with timing density
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>15</td>
                <td>Unassigned</td>
                <td>Reserved for future computationally-bound claims</td>
                <td>[this document]</td>
              </tr>
            </tbody>
          </table>
        </section>

        <section anchor="iana-claim-types-monitoring-dependent">
          <name>Monitoring-Dependent Claims (16-63)</name>

          <table anchor="tbl-monitoring-dependent-claims">
            <name>Monitoring-Dependent Claim Types</name>
            <thead>
              <tr>
                <th>Value</th>
                <th>Name</th>
                <th>Description</th>
                <th>Reference</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>16</td>
                <td>max-paste-event-chars</td>
                <td>
                  Maximum characters in any single paste event
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>17</td>
                <td>max-clipboard-access-chars</td>
                <td>
                  Maximum total characters accessed from clipboard
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>18</td>
                <td>no-paste-from-ai-tool</td>
                <td>
                  No paste operations from known AI tool applications
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>19</td>
                <td>Unassigned</td>
                <td>Reserved</td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>20</td>
                <td>max-insertion-rate-wpm</td>
                <td>
                  Maximum sustained insertion rate in words per minute
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>21</td>
                <td>no-automated-input-pattern</td>
                <td>
                  No detected automated or scripted input patterns
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>22</td>
                <td>no-macro-replay-detected</td>
                <td>
                  No keyboard macro replay patterns detected
                </td>
                <td>[this document]</td>
              </tr>
              <tr>
                <td>23-63</td>
                <td>Unassigned</td>
                <td>Reserved for future monitoring-dependent claims</td>
                <td>[this document]</td>
              </tr>
            </tbody>
          </table>
        </section>

      <section anchor="iana-vdf-procedures">
        <name>Registration Procedures</name>

        <t>
          This document also requests creation of the "Proof of Process
          VDF Algorithms" registry, which contains identifiers for the
          Verifiable Delay Function algorithms used in Evidence Packets.
          The registration procedures for this registry are as follows:
        </t>

        <table anchor="tbl-vdf-procedures">
          <name>VDF Algorithms Registration Procedures</name>
          <thead>
            <tr>
              <th>Range</th>
              <th>Category</th>
              <th>Registration Procedure</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1-15</td>
              <td>Iterated hash VDFs</td>
              <td>Standards Action</td>
            </tr>
            <tr>
              <td>16-31</td>
              <td>Succinct VDFs</td>
              <td>Standards Action</td>
            </tr>
            <tr>
              <td>32-63</td>
              <td>Experimental</td>
              <td>Expert Review</td>
            </tr>
          </tbody>
        </table>

        <t>
          Iterated hash VDFs (1-15) are algorithms whose verification
          requires recomputing the full iteration chain, so verification
          cost is linear in the delay parameter. Standards Action
          ensures thorough security analysis.
        </t>
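        <t>
          As a non-normative illustration of this verification cost, the
          iterated-sha256 construction (value 1) can be sketched as
          follows; the seed and iteration count shown are illustrative
          rather than registered parameters:
        </t>

```python
import hashlib

def vdf_eval(seed: bytes, iterations: int) -> bytes:
    # Each step depends on the previous digest, so the
    # computation is inherently sequential.
    state = seed
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

def vdf_verify(seed: bytes, iterations: int, claimed: bytes) -> bool:
    # O(n) verification: the only way to check the claimed output
    # is to recompute the entire chain.
    return vdf_eval(seed, iterations) == claimed

# Illustrative parameters, not registered values.
output = vdf_eval(b"segment-hash", 10_000)
assert vdf_verify(b"segment-hash", 10_000, output)
```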

        <t>
          Succinct VDFs (16-31) are algorithms with efficient
          verification (e.g., <xref target="Pietrzak2019"/>,
          <xref target="Wesolowski2019"/>). Standards Action ensures
          cryptographic soundness.
        </t>
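        <t>
          As a non-normative illustration of succinct verification, the
          following sketch implements Wesolowski's proof of
          exponentiation over a deliberately tiny (insecure) modulus;
          production deployments require the RSA-3072 or class-group
          parameters registered above:
        </t>

```python
import hashlib

# Toy modulus for illustration only; real deployments use RSA-3072
# or class groups whose order is unknown to the prover.
N = 1000003 * 1000033

def _next_prime(n: int) -> int:
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True
    while not is_prime(n):
        n += 1
    return n

def hash_to_prime(x: int, y: int) -> int:
    # Fiat-Shamir challenge: derive a prime l from (x, y).
    h = hashlib.sha256(f"{x}|{y}".encode()).digest()
    return _next_prime(int.from_bytes(h[:4], "big"))

def vdf_eval(x: int, T: int) -> int:
    # T sequential squarings: y = x^(2^T) mod N.
    y = x
    for _ in range(T):
        y = y * y % N
    return y

def prove(x: int, y: int, T: int) -> int:
    # Write 2^T = q*l + r; the proof is pi = x^q mod N.
    l = hash_to_prime(x, y)
    q = pow(2, T) // l
    return pow(x, q, N)

def verify(x: int, y: int, T: int, pi: int) -> bool:
    # Check pi^l * x^r == y with r = 2^T mod l, so verification
    # costs two small exponentiations rather than T squarings.
    l = hash_to_prime(x, y)
    r = pow(2, T, l)
    return (pow(pi, l, N) * pow(x, r, N)) % N == y

x, T = 5, 512
y = vdf_eval(x, T)
assert verify(x, y, T, prove(x, y, T))
```

        <t>
          The verifier performs two small modular exponentiations
          instead of T sequential squarings, which is the asymmetry the
          O(1) and O(log n) entries in the registry capture.
        </t>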

        <t>
          Experimental algorithms (32-63) may be registered with Expert
          Review for research and interoperability testing. Production
          use requires promotion to Standards Action ranges.
        </t>
      </section>

      <section anchor="iana-vdf-template">
        <name>Registration Template</name>

        <t>
          Registrations MUST include the following fields:
        </t>

        <dl>
          <dt>Algorithm Value:</dt>
          <dd>Integer identifier in the appropriate range</dd>

          <dt>Algorithm Name:</dt>
          <dd>Human-readable name</dd>

          <dt>Category:</dt>
          <dd>One of: iterated-hash, succinct, or experimental</dd>

          <dt>Parameters:</dt>
          <dd>Required CDDL structure for algorithm parameters</dd>

          <dt>Verification Complexity:</dt>
          <dd>Asymptotic verification complexity</dd>

          <dt>Security Assumptions:</dt>
          <dd>Cryptographic assumptions for security</dd>

          <dt>Reference:</dt>
          <dd>Document specifying the algorithm</dd>
        </dl>
      </section>

      <section anchor="iana-vdf-initial">
        <name>Initial Registry Contents</name>

        <table anchor="tbl-vdf-algorithms">
          <name>VDF Algorithms Initial Values</name>
          <thead>
            <tr>
              <th>Value</th>
              <th>Name</th>
              <th>Category</th>
              <th>Verification</th>
              <th>Reference</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1</td>
              <td>iterated-sha256</td>
              <td>iterated-hash</td>
              <td>O(n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>2</td>
              <td>iterated-sha3-256</td>
              <td>iterated-hash</td>
              <td>O(n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>3-15</td>
              <td>Unassigned</td>
              <td>iterated-hash</td>
              <td>-</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>16</td>
              <td>pietrzak-rsa3072</td>
              <td>succinct</td>
              <td>O(log n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>17</td>
              <td>wesolowski-rsa3072</td>
              <td>succinct</td>
              <td>O(1)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>18</td>
              <td>pietrzak-class-group</td>
              <td>succinct</td>
              <td>O(log n)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>19</td>
              <td>wesolowski-class-group</td>
              <td>succinct</td>
              <td>O(1)</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>20-31</td>
              <td>Unassigned</td>
              <td>succinct</td>
              <td>-</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>32-63</td>
              <td>Unassigned</td>
              <td>experimental</td>
              <td>-</td>
              <td>[this document]</td>
            </tr>
          </tbody>
        </table>

        <t>
          The iterated hash algorithms use the iterated-hash-params
          CDDL structure (keys 1-2). The succinct algorithms use the
          succinct-vdf-params CDDL structure (keys 10-11). See
          <xref target="vdf-mechanisms"/> for detailed specifications.
        </t>
      </section>
    </section>

    <section anchor="iana-entropy-sources-registry">
      <name>Proof of Process Entropy Sources Registry</name>

      <t>
        This document requests creation of the "Proof of Process Entropy
        Sources" registry. This registry contains identifiers for
        behavioral entropy sources used in Jitter Seal bindings.
      </t>

      <section anchor="iana-entropy-procedures">
        <name>Registration Procedures</name>

        <t>
          The registration procedure for this registry is Specification
          Required.
        </t>

        <t>
          Registrations MUST include a specification describing:
        </t>

        <ul>
          <li>
            The input modality or behavioral signal being captured
          </li>
          <li>
            The method for converting the signal to timing intervals
          </li>
          <li>
            Privacy implications of capturing this entropy source
          </li>
          <li>
            Expected entropy density (bits per sample) under typical conditions
          </li>
        </ul>
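        <t>
          As a non-normative illustration of the entropy-density
          requirement, a plug-in Shannon estimate over binned timing
          intervals is one simple approach; the 10 ms bin width below is
          illustrative:
        </t>

```python
import math
from collections import Counter

def entropy_bits_per_sample(intervals_ms, bin_ms=10):
    # Plug-in Shannon entropy of inter-event intervals, binned to
    # bin_ms resolution. Returns bits per sample.
    bins = Counter(int(t // bin_ms) for t in intervals_ms)
    n = sum(bins.values())
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# Varied, human-like inter-key intervals carry entropy...
human = [85, 132, 201, 97, 310, 150, 88, 123]
# ...while constant-interval macro replay carries none.
replay = [50] * len(human)

assert entropy_bits_per_sample(human) > 2.0
assert entropy_bits_per_sample(replay) == 0.0
```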
      </section>

      <section anchor="iana-entropy-template">
        <name>Registration Template</name>

        <t>
          Registrations MUST include the following fields:
        </t>

        <dl>
          <dt>Source Value:</dt>
          <dd>Integer identifier</dd>

          <dt>Source Name:</dt>
          <dd>Human-readable name (lowercase with hyphens)</dd>

          <dt>Description:</dt>
          <dd>Brief description of the entropy source</dd>

          <dt>Privacy Impact:</dt>
          <dd>One of: minimal, low, moderate, high</dd>

          <dt>Reference:</dt>
          <dd>Document specifying the entropy source</dd>
        </dl>
      </section>

      <section anchor="iana-entropy-initial">
        <name>Initial Registry Contents</name>

        <table anchor="tbl-entropy-sources">
          <name>Entropy Sources Initial Values</name>
          <thead>
            <tr>
              <th>Value</th>
              <th>Name</th>
              <th>Description</th>
              <th>Privacy Impact</th>
              <th>Reference</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>1</td>
              <td>keystroke-timing</td>
              <td>Inter-key intervals from keyboard input</td>
              <td>moderate</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>2</td>
              <td>pause-patterns</td>
              <td>Gaps between editing bursts (&gt;2 seconds)</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>3</td>
              <td>edit-cadence</td>
              <td>Rhythm of insertions/deletions over time</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>4</td>
              <td>cursor-movement</td>
              <td>Navigation timing within document</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>5</td>
              <td>scroll-behavior</td>
              <td>Document scrolling patterns</td>
              <td>minimal</td>
              <td>[this document]</td>
            </tr>
            <tr>
              <td>6</td>
              <td>focus-changes</td>
              <td>Application focus gain/loss events</td>
              <td>low</td>
              <td>[this document]</td>
            </tr>
          </tbody>
        </table>
      </section>
    </section>
  </section>

  <section anchor="iana-media-types">
    <name>Media Types Registry</name>

    <t>
      This document requests registration of two media types in the
      "Media Types" registry <xref target="IANA.media-types"/>.
    </t>
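    <t>
      Both subtypes carry a 32-bit CBOR semantic tag, and the
      magic-number bytes at offset 0 follow mechanically from the CBOR
      head rules: initial byte 0xDA (major type 6 with a 4-byte
      argument) followed by the tag value in big-endian order. A
      non-normative sketch:
    </t>

```python
import struct

def cbor_tag_prefix(tag: int) -> bytes:
    # CBOR head for a tag with a 32-bit argument: initial byte 0xDA
    # (major type 6, additional info 26), then the tag, big-endian.
    assert tag <= 0xFFFFFFFF
    return struct.pack(">BI", 0xDA, tag)

# Evidence-packet tag stated in the interoperability considerations.
print(cbor_tag_prefix(1347571280).hex())  # -> "da50524e50"
```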

    <section anchor="iana-media-type-pop">
      <name>application/vnd.example-pop+cbor Media Type</name>

      <dl>
        <dt>Type name:</dt>
        <dd>application</dd>

        <dt>Subtype name:</dt>
        <dd>vnd.example-pop+cbor</dd>

        <dt>Required parameters:</dt>
        <dd>N/A</dd>

        <dt>Optional parameters:</dt>
        <dd>N/A</dd>

        <dt>Encoding considerations:</dt>
        <dd>
          binary. As a CBOR format, it contains NUL octets and
          non-line-oriented data.
        </dd>

        <dt>Security considerations:</dt>
        <dd>
          This media type contains cryptographically anchored evidence
          of authorship process. It does not contain active or executable
          content. Integrity is ensured via an HMAC-SHA256 segment chain
          and Verifiable Delay Functions (VDFs). Privacy is maintained
          through author-controlled salting of content hashes as defined
          in <xref target="salt-modes"/>. Security considerations of
          CBOR <xref target="RFC8949"/> apply. See also
          <xref target="security-considerations"/> of this document.
        </dd>

        <dt>Interoperability considerations:</dt>
        <dd>
          While the +cbor suffix allows generic parsing, full semantic
          validation and behavioral forensic analysis require a
          witnessd-compatible processor as defined in this specification.
          The content is a CBOR-encoded evidence-packet structure with
          semantic tag 1347571280.
        </dd>

        <dt>Published specification:</dt>
        <dd>[this document]</dd>

        <dt>Applications that use this media type:</dt>
        <dd>
          Generation of digital authorship evidence by the witnessd suite
          and WritersLogic integrated editors; verification services,
          document provenance systems, and academic integrity platforms.
        </dd>

        <dt>Fragment identifier considerations:</dt>
        <dd>N/A</dd>

        <dt>Additional information:</dt>
        <dd>
          <dl spacing="compact">
            <dt>Deprecated alias names for this type:</dt>
            <dd>N/A</dd>

            <dt>Magic number(s):</dt>
            <dd>0xDA50524E50 (CBOR tag encoding at offset 0)</dd>

            <dt>File extension(s):</dt>
            <dd>.pop</dd>

            <dt>Macintosh file type code(s):</dt>
            <dd>N/A</dd>
          </dl>
        </dd>

        <dt>Person and email address to contact for further information:</dt>
        <dd>David Condrey &lt;david@writerslogic.com&gt;</dd>

        <dt>Intended usage:</dt>
        <dd>COMMON</dd>

        <dt>Restrictions on usage:</dt>
        <dd>N/A</dd>

        <dt>Author:</dt>
        <dd>David Condrey</dd>

        <dt>Change controller:</dt>
        <dd>WritersLogic Inc.</dd>

        <dt>Provisional registration:</dt>
        <dd>No</dd>
      </dl>
    </section>

    <section anchor="iana-media-type-war">
      <name>application/vnd.example-war+cbor Media Type</name>

      <dl>
        <dt>Type name:</dt>
        <dd>application</dd>

        <dt>Subtype name:</dt>
        <dd>vnd.example-war+cbor</dd>

        <dt>Required parameters:</dt>
        <dd>N/A</dd>

        <dt>Optional parameters:</dt>
        <dd>N/A</dd>

        <dt>Encoding considerations:</dt>
        <dd>
          binary. As a CBOR-encoded format, it contains NUL octets and
          non-line-oriented data.
        </dd>

        <dt>Security considerations:</dt>
        <dd>
          This media type conveys the final appraisal result (verdict) of
          an authorship attestation. (1) It does not contain active or
          executable content. (2) Integrity and authenticity are provided
          via a COSE signature <xref target="RFC9052"/> that MUST be
          verified against the Verifier's public key. (3) The information
          identifies a specific document by its content hash; privacy is
          managed through the hash-salting protocols defined in
          <xref target="salt-modes"/>. (4) The security considerations for
          CBOR <xref target="RFC8949"/> and COSE <xref target="RFC9052"/>
          apply. Users are cautioned not to rely on unsigned or unverified
          .war files for high-stakes authenticity claims. See also
          <xref target="security-considerations"/> of this document.
        </dd>

        <dt>Interoperability considerations:</dt>
        <dd>
          The +cbor suffix allows generic CBOR tools to identify the
          underlying encoding. This format is a specific profile of the
          RATS Attestation Result and references a Proof of Process
          (.pop) evidence packet by UUID as defined in this specification.
          The content is a CBOR-encoded attestation-result structure with
          semantic tag 1463894560.
        </dd>

        <dt>Published specification:</dt>
        <dd>[this document]</dd>

        <dt>Applications that use this media type:</dt>
        <dd>
          Verification and display of authorship scores by publishers,
          academic repositories, literary journals, and the WritersLogic
          verification suite.
        </dd>

        <dt>Fragment identifier considerations:</dt>
        <dd>N/A</dd>

        <dt>Additional information:</dt>
        <dd>
          <dl spacing="compact">
            <dt>Deprecated alias names for this type:</dt>
            <dd>N/A</dd>

            <dt>Magic number(s):</dt>
            <dd>0xDA57414220 (CBOR tag encoding at offset 0)</dd>

            <dt>File extension(s):</dt>
            <dd>.war</dd>

            <dt>Macintosh file type code(s):</dt>
            <dd>N/A</dd>
          </dl>
        </dd>

        <dt>Person and email address to contact for further information:</dt>
        <dd>David Condrey &lt;david@writerslogic.com&gt;</dd>

        <dt>Intended usage:</dt>
        <dd>COMMON</dd>

        <dt>Restrictions on usage:</dt>
        <dd>N/A</dd>

        <dt>Author:</dt>
        <dd>David Condrey</dd>

        <dt>Change controller:</dt>
        <dd>WritersLogic Inc.</dd>

        <dt>Provisional registration:</dt>
        <dd>No</dd>
      </dl>
    </section>
  </section>

  <section anchor="iana-expert-review">
    <name>Designated Expert Instructions</name>

    <t>
      The designated experts for the registries created by this document
      should apply the following criteria when evaluating registration
      requests:
    </t>

    <section anchor="iana-expert-claim-types">
      <name>Proof of Process Claim Types Registry</name>

      <t>
        For claim types requiring Specification Required:
      </t>

      <ul>
        <li>
          The specification MUST clearly define what the claim asserts
        </li>
        <li>
          For computationally-bound claims, the specification MUST demonstrate
          that the claim can be verified solely from the Evidence packet
        </li>
        <li>
          For monitoring-dependent claims, the specification MUST
          document the Attesting Environment trust assumptions
        </li>
        <li>
          The claim name SHOULD be descriptive and follow existing
          naming conventions
        </li>
      </ul>

      <t>
        For environmental claims requiring Expert Review:
      </t>

      <ul>
        <li>
          The specification SHOULD describe implementation considerations
        </li>
        <li>
          The claim SHOULD NOT duplicate existing claims
        </li>
        <li>
          Privacy implications SHOULD be documented
        </li>
      </ul>
    </section>

    <section anchor="iana-expert-vdf">
      <name>Proof of Process VDF Algorithms Registry</name>

      <t>
        For experimental algorithms requiring Expert Review:
      </t>

      <ul>
        <li>
          The algorithm MUST be documented with sufficient detail for
          independent implementation
        </li>
        <li>
          Security analysis SHOULD be provided, even if preliminary
        </li>
        <li>
          The algorithm SHOULD NOT be a minor variant of an existing
          registered algorithm
        </li>
        <li>
          Implementation availability is encouraged but not required
        </li>
      </ul>
    </section>

    <section anchor="iana-expert-entropy">
      <name>Proof of Process Entropy Sources Registry</name>

      <t>
        For entropy sources requiring Specification Required:
      </t>

      <ul>
        <li>
          The specification MUST describe how timing intervals are
          derived from the entropy source
        </li>
        <li>
          Expected entropy density under typical conditions SHOULD be
          documented
        </li>
        <li>
          Privacy implications MUST be clearly stated
        </li>
        <li>
          The entropy source SHOULD provide meaningful behavioral signal
          that cannot be trivially simulated
        </li>
      </ul>
    </section>
  </section>
  </section>


  </middle>

  <back>
    <!-- Normative References -->
    <references>
      <name>References</name>

      <references>
        <name>Normative References</name>

        <reference anchor="RFC2119" target="https://www.rfc-editor.org/info/rfc2119">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>

        <reference anchor="RFC8174" target="https://www.rfc-editor.org/info/rfc8174">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>

        <reference anchor="RFC2104" target="https://www.rfc-editor.org/info/rfc2104">
          <front>
            <title>HMAC: Keyed-Hashing for Message Authentication</title>
            <author fullname="H. Krawczyk" initials="H." surname="Krawczyk"/>
            <author fullname="M. Bellare" initials="M." surname="Bellare"/>
            <author fullname="R. Canetti" initials="R." surname="Canetti"/>
            <date month="February" year="1997"/>
          </front>
          <seriesInfo name="RFC" value="2104"/>
          <seriesInfo name="DOI" value="10.17487/RFC2104"/>
        </reference>

        <reference anchor="RFC3339" target="https://www.rfc-editor.org/info/rfc3339">
          <front>
            <title>Date and Time on the Internet: Timestamps</title>
            <author fullname="G. Klyne" initials="G." surname="Klyne"/>
            <author fullname="C. Newman" initials="C." surname="Newman"/>
            <date month="July" year="2002"/>
          </front>
          <seriesInfo name="RFC" value="3339"/>
          <seriesInfo name="DOI" value="10.17487/RFC3339"/>
        </reference>

        <reference anchor="RFC6234" target="https://www.rfc-editor.org/info/rfc6234">
          <front>
            <title>US Secure Hash Algorithms (SHA and SHA-based HMAC and HKDF)</title>
            <author fullname="D. Eastlake 3rd" initials="D." surname="Eastlake 3rd"/>
            <author fullname="T. Hansen" initials="T." surname="Hansen"/>
            <date month="May" year="2011"/>
          </front>
          <seriesInfo name="RFC" value="6234"/>
          <seriesInfo name="DOI" value="10.17487/RFC6234"/>
        </reference>

        <reference anchor="RFC8610" target="https://www.rfc-editor.org/info/rfc8610">
          <front>
            <title>Concise Data Definition Language (CDDL): A Notational Convention to Express Concise Binary Object Representation (CBOR) and JSON Data Structures</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="C. Vigano" initials="C." surname="Vigano"/>
            <author fullname="C. Bormann" initials="C." surname="Bormann"/>
            <date month="June" year="2019"/>
          </front>
          <seriesInfo name="RFC" value="8610"/>
          <seriesInfo name="DOI" value="10.17487/RFC8610"/>
        </reference>

        <reference anchor="RFC8949" target="https://www.rfc-editor.org/info/rfc8949">
          <front>
            <title>Concise Binary Object Representation (CBOR)</title>
            <author fullname="C. Bormann" initials="C." surname="Bormann"/>
            <author fullname="P. Hoffman" initials="P." surname="Hoffman"/>
            <date month="December" year="2020"/>
          </front>
          <seriesInfo name="STD" value="94"/>
          <seriesInfo name="RFC" value="8949"/>
          <seriesInfo name="DOI" value="10.17487/RFC8949"/>
        </reference>

        <reference anchor="RFC9052" target="https://www.rfc-editor.org/info/rfc9052">
          <front>
            <title>CBOR Object Signing and Encryption (COSE): Structures and Process</title>
            <author fullname="J. Schaad" initials="J." surname="Schaad"/>
            <date month="August" year="2022"/>
          </front>
          <seriesInfo name="STD" value="96"/>
          <seriesInfo name="RFC" value="9052"/>
          <seriesInfo name="DOI" value="10.17487/RFC9052"/>
        </reference>

        <reference anchor="RFC9334" target="https://www.rfc-editor.org/info/rfc9334">
          <front>
            <title>Remote ATtestation procedureS (RATS) Architecture</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="D. Thaler" initials="D." surname="Thaler"/>
            <author fullname="M. Richardson" initials="M." surname="Richardson"/>
            <author fullname="N. Smith" initials="N." surname="Smith"/>
            <author fullname="W. Pan" initials="W." surname="Pan"/>
            <date month="January" year="2024"/>
          </front>
          <seriesInfo name="RFC" value="9334"/>
          <seriesInfo name="DOI" value="10.17487/RFC9334"/>
        </reference>

        <reference anchor="RFC9711" target="https://www.rfc-editor.org/info/rfc9711">
          <front>
            <title>The Entity Attestation Token (EAT)</title>
            <author fullname="L. Lundblade" initials="L." surname="Lundblade"/>
            <author fullname="G. Mandyam" initials="G." surname="Mandyam"/>
            <author fullname="J. O'Donoghue" initials="J." surname="O'Donoghue"/>
            <author fullname="C. Wallace" initials="C." surname="Wallace"/>
            <date month="April" year="2025"/>
          </front>
          <seriesInfo name="RFC" value="9711"/>
          <seriesInfo name="DOI" value="10.17487/RFC9711"/>
        </reference>

        <reference anchor="IANA.cbor-tags" target="https://www.iana.org/assignments/cbor-tags">
          <front>
            <title>CBOR Tags</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

        <reference anchor="IANA.media-types" target="https://www.iana.org/assignments/media-types">
          <front>
            <title>Media Types</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

        <reference anchor="IANA.cose" target="https://www.iana.org/assignments/cose">
          <front>
            <title>CBOR Object Signing and Encryption (COSE)</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

        <reference anchor="IANA.cwt" target="https://www.iana.org/assignments/cwt">
          <front>
            <title>CBOR Web Token (CWT) Claims</title>
            <author>
              <organization>IANA</organization>
            </author>
            <date/>
          </front>
        </reference>

      </references>

      <references>
        <name>Informative References</name>

        <reference anchor="RFC3161" target="https://www.rfc-editor.org/info/rfc3161">
          <front>
            <title>Internet X.509 Public Key Infrastructure Time-Stamp Protocol (TSP)</title>
            <author fullname="C. Adams" initials="C." surname="Adams"/>
            <author fullname="P. Cain" initials="P." surname="Cain"/>
            <author fullname="D. Pinkas" initials="D." surname="Pinkas"/>
            <author fullname="R. Zuccherato" initials="R." surname="Zuccherato"/>
            <date month="August" year="2001"/>
          </front>
          <seriesInfo name="RFC" value="3161"/>
          <seriesInfo name="DOI" value="10.17487/RFC3161"/>
        </reference>

        <reference anchor="RFC7942" target="https://www.rfc-editor.org/info/rfc7942">
          <front>
            <title>Improving Awareness of Protocol Implementations: The Rough Guide</title>
            <author fullname="Y. Sheffer" initials="Y." surname="Sheffer"/>
            <author fullname="A. Farrel" initials="A." surname="Farrel"/>
            <date month="July" year="2016"/>
          </front>
          <seriesInfo name="RFC" value="7942"/>
          <seriesInfo name="DOI" value="10.17487/RFC7942"/>
        </reference>

        <reference anchor="I-D.ietf-rats-eat">
          <front>
            <title>The Entity Attestation Token (EAT)</title>
            <author fullname="L. Lundblade" initials="L." surname="Lundblade"/>
            <author fullname="G. Mandyam" initials="G." surname="Mandyam"/>
            <author fullname="J. O'Donoghue" initials="J." surname="O'Donoghue"/>
            <date month="February" year="2024"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-rats-eat-28"/>
        </reference>

        <reference anchor="I-D.condrey-rats-pop-examples">
          <front>
            <title>Examples of Proof of Process Provenance (PPPP) Evidence Packets and Attestation Results</title>
            <author fullname="D. Condrey" initials="D." surname="Condrey"/>
            <date month="February" year="2024"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-condrey-rats-pop-examples-01"/>
        </reference>

        <reference anchor="I-D.condrey-rats-pop-protocol">
          <front>
            <title>Proof of Process (PoP): A Verifiable Process Transcript Format</title>
            <author fullname="D. Condrey" initials="D." surname="Condrey"/>
            <date month="February" year="2026"/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-condrey-rats-pop-protocol-00"/>
        </reference>

        <reference anchor="RFC6973" target="https://www.rfc-editor.org/info/rfc6973">
          <front>
            <title>Privacy Considerations for Internet Protocols</title>
            <author fullname="A. Cooper" initials="A." surname="Cooper"/>
            <author fullname="H. Tschofenig" initials="H." surname="Tschofenig"/>
            <author fullname="B. Aboba" initials="B." surname="Aboba"/>
            <author fullname="J. Peterson" initials="J." surname="Peterson"/>
            <author fullname="J. Morris" initials="J." surname="Morris"/>
            <author fullname="M. Hansen" initials="M." surname="Hansen"/>
            <author fullname="R. Smith" initials="R." surname="Smith"/>
            <date month="July" year="2013"/>
          </front>
          <seriesInfo name="RFC" value="6973"/>
          <seriesInfo name="DOI" value="10.17487/RFC6973"/>
        </reference>

        <reference anchor="RFC9562" target="https://www.rfc-editor.org/info/rfc9562">
          <front>
            <title>Universally Unique IDentifiers (UUIDs)</title>
            <author fullname="K. Davis" initials="K." surname="Davis"/>
            <author fullname="B. Peabody" initials="B." surname="Peabody"/>
            <author fullname="P. Leach" initials="P." surname="Leach"/>
            <date month="May" year="2024"/>
          </front>
          <seriesInfo name="RFC" value="9562"/>
          <seriesInfo name="DOI" value="10.17487/RFC9562"/>
        </reference>

        <reference anchor="I-D.ietf-rats-ar4si" target="https://datatracker.ietf.org/doc/html/draft-ietf-rats-ar4si">
          <front>
            <title>Attestation Results for Secure Interactions</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="T. Fossati" initials="T." surname="Fossati"/>
            <author fullname="W. Pan" initials="W." surname="Pan"/>
            <author fullname="E. Voit" initials="E." surname="Voit"/>
            <date/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-rats-ar4si"/>
        </reference>

        <reference anchor="I-D.ietf-rats-epoch-markers" target="https://datatracker.ietf.org/doc/html/draft-ietf-rats-epoch-markers">
          <front>
            <title>RATS Epoch Markers</title>
            <author fullname="H. Birkholz" initials="H." surname="Birkholz"/>
            <author fullname="T. Fossati" initials="T." surname="Fossati"/>
            <author fullname="W. Pan" initials="W." surname="Pan"/>
            <author fullname="C. Bormann" initials="C." surname="Bormann"/>
            <date/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-rats-epoch-markers"/>
        </reference>

        <reference anchor="I-D.ietf-rats-ear" target="https://datatracker.ietf.org/doc/html/draft-ietf-rats-ear">
          <front>
            <title>EAT Attestation Results</title>
            <author fullname="T. Fossati" initials="T." surname="Fossati"/>
            <author fullname="S. Frost" initials="S." surname="Frost"/>
            <date/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-rats-ear"/>
        </reference>

        <reference anchor="I-D.condrey-rats-pop-schema">
          <front>
            <title>PPPP CDDL Schema</title>
            <author fullname="D. Condrey" initials="D." surname="Condrey"/>
            <date/>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-condrey-rats-pop-schema-01"/>
        </reference>

        <reference anchor="Pietrzak2019" target="https://eprint.iacr.org/2018/627">
          <front>
            <title>Simple Verifiable Delay Functions</title>
            <author fullname="K. Pietrzak" initials="K." surname="Pietrzak"/>
            <date year="2019"/>
          </front>
          <seriesInfo name="ITCS" value="2019"/>
        </reference>

        <reference anchor="Wesolowski2019" target="https://eprint.iacr.org/2018/623">
          <front>
            <title>Efficient Verifiable Delay Functions</title>
            <author fullname="B. Wesolowski" initials="B." surname="Wesolowski"/>
            <date year="2019"/>
          </front>
          <seriesInfo name="EUROCRYPT" value="2019"/>
        </reference>

        <reference anchor="OpenTimestamps" target="https://opentimestamps.org">
          <front>
            <title>OpenTimestamps: Scalable, Trust-Minimized, Distributed Timestamping with Bitcoin</title>
            <author fullname="P. Todd" initials="P." surname="Todd"/>
            <date year="2016"/>
          </front>
        </reference>

        <reference anchor="Grudin1983">
          <front>
            <title>Error patterns in novice and skilled transcription typing</title>
            <author fullname="J. Grudin" initials="J." surname="Grudin"/>
            <date year="1983"/>
          </front>
        </reference>
        <reference anchor="Rayner1998">
          <front>
            <title>Eye movements in reading and information processing: 20 years of research</title>
            <author fullname="K. Rayner" initials="K." surname="Rayner"/>
            <date year="1998"/>
          </front>
          <seriesInfo name="Psychological Bulletin" value="124(3)"/>
        </reference>
        <reference anchor="Mandelbrot1982">
          <front>
            <title>The Fractal Geometry of Nature</title>
            <author fullname="B. Mandelbrot" initials="B." surname="Mandelbrot"/>
            <date year="1982"/>
          </front>
        </reference>

        <reference anchor="Kushniruk1991">
          <front>
            <title>Cognitive processes in the design of user interfaces</title>
            <author fullname="A. W. Kushniruk" initials="A." surname="Kushniruk"/>
            <date year="1991"/>
          </front>
          <seriesInfo name="Ergonomics" value="34(10)"/>
        </reference>

        <reference anchor="TPM2.0" target="https://trustedcomputinggroup.org/resource/tpm-library-specification/">
          <front>
            <title>TPM 2.0 Library Specification</title>
            <author>
              <organization>Trusted Computing Group</organization>
            </author>
            <date year="2019"/>
          </front>
        </reference>

      </references>

    </references>

    <!-- Acknowledgments -->
    <section anchor="acknowledgments" numbered="false">
      <name>Acknowledgments</name>
      <t>
        The author would like to thank the members of the IETF RATS working
        group for their foundational work on remote attestation architectures
        that this specification builds upon.
      </t>
      <t>
        Special thanks to the reviewers and contributors who provided feedback
        on early drafts of this specification.
      </t>
      <!-- Placeholder for additional acknowledgments -->
    </section>

    <!-- Document History -->
    <section anchor="document-history" numbered="false" removeInRFC="true">
      <name>Document History</name>

      <section anchor="history-01" numbered="false">
        <name>draft-condrey-rats-pop-01</name>
        <t>
          Revision addressing working group feedback.
        </t>
        <ul>
          <li>Reframed core claim around source consistency analysis
          and decision history rather than authorship verification</li>
          <li>Added Evidence Flow section mapping RATS passport model</li>
          <li>Added Decision History framework capturing edit operation
          topology without content access</li>
          <li>Added Privacy-Preserving Document Classification</li>
          <li>Added Input Event Trust Boundary with tier-mapped
          adversary model</li>
          <li>Added Source Consistency transition pattern taxonomy</li>
          <li>Replaced interactive Vise handshake with non-interactive
          local behavioral analysis consistent with implementation</li>
          <li>Rewrote abstract and introduction for clarity</li>
          <li>Addressed relay, replay, and diversion attack concerns</li>
        </ul>
      </section>

      <section anchor="history-00" numbered="false">
        <name>draft-condrey-rats-pop-00</name>
        <t>
          Initial submission.
        </t>
        <ul>
          <li>Defined Evidence Packet (.pop) and Attestation Result (.war) formats</li>
          <li>Specified Jitter Seal mechanism for behavioral entropy capture</li>
          <li>Specified VDF mechanisms for temporal ordering proofs</li>
          <li>Defined absence proof taxonomy with trust requirements</li>
          <li>Established forgery cost bounds methodology</li>
          <li>Documented security and privacy considerations</li>
          <li>Requested IANA registrations for CBOR tags, media types, and EAT claims</li>
        </ul>
      </section>

    </section>

    <section anchor="circuit-constraints" numbered="false">
      <name>Appendix: Verification Constraint Summary</name>
      <t>
        For interoperability, PPPP-compliant Verifiers MUST validate the
        following constraints on Evidence Packets:
      </t>
      <ol>
        <li><strong>VDF Continuity:</strong> H(out_{i-1} || content_i || jitter_i) = in_i for all checkpoints i.</li>
        <li><strong>Temporal Monotonicity:</strong> Each checkpoint timestamp strictly exceeds its predecessor.</li>
        <li><strong>Chain Integrity:</strong> SHA-256 hash chain is unbroken from genesis to final checkpoint.</li>
        <li><strong>Entropy Commitment:</strong> HMAC binding between behavioral entropy and checkpoint content is valid.</li>
        <li><strong>VDF Sequential Proof:</strong> Pietrzak proof verifies for declared iteration count at each checkpoint.</li>
        <li><strong>Source Consistency:</strong> Edit-operation distribution and timing patterns are evaluated for coherence across the checkpoint chain; the result is informational and does not itself cause verification to fail.</li>
      </ol>
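      <t>
        As an illustration, constraints 1 through 4 reduce to straightforward
        hash and HMAC comparisons over consecutive checkpoints. The sketch
        below uses hypothetical field names (vdf_in, vdf_out, content_hash,
        jitter, chain_hash, entropy_mac, timestamp); the normative packet
        layout is defined by the CDDL schema
        <xref target="I-D.condrey-rats-pop-schema"/>. VDF sequential-proof
        verification (constraint 5) and source-consistency evaluation
        (constraint 6) are omitted from this sketch.
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib
import hmac

def verify_evidence_chain(checkpoints, entropy_key):
    # Validate constraints 1-4 over an ordered list of checkpoint
    # records, starting from the genesis checkpoint.  Returns True
    # only if every link in the chain is consistent.
    prev = checkpoints[0]  # genesis checkpoint
    for cp in checkpoints[1:]:
        # 1. VDF Continuity: H(out_{i-1} || content_i || jitter_i) = in_i
        vdf_in = hashlib.sha256(
            prev["vdf_out"] + cp["content_hash"] + cp["jitter"]).digest()
        if vdf_in != cp["vdf_in"]:
            return False
        # 2. Temporal Monotonicity: timestamps strictly increase
        if cp["timestamp"] <= prev["timestamp"]:
            return False
        # 3. Chain Integrity: each checkpoint commits to its predecessor
        link = hashlib.sha256(
            prev["chain_hash"] + cp["content_hash"]).digest()
        if link != cp["chain_hash"]:
            return False
        # 4. Entropy Commitment: HMAC binds behavioral entropy to content
        mac = hmac.new(entropy_key,
                       cp["content_hash"] + cp["jitter"],
                       hashlib.sha256).digest()
        if not hmac.compare_digest(mac, cp["entropy_mac"]):
            return False
        prev = cp
    return True
]]></sourcecode>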
    </section>

    <section anchor="vdf-test-vectors" numbered="false">
      <name>Appendix: VDF Verification Test Vectors</name>
      <t>
        The following test vectors (SHA-256 Iterated Hash) are provided for
        interoperability testing:
      </t>
      <artwork><![CDATA[
Input (Seed): "witnessd-genesis-v1" (hex: 7769746e657373642d67656e657369732d7631)
Iterations: 10,000
Output (Expected): 7d3c9a4f... (Full 32-byte hash)

Input (Entangled): "DST_CHAIN" || H(content) || Output_n-1
Iterations: 50,000
Output (Expected): b1a2c3d4...
]]></artwork>
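      <t>
        For reference, the iterated-hash construction underlying these
        vectors can be evaluated as follows. Evaluation is inherently
        sequential; a Verifier without a Pietrzak or Wesolowski proof must
        re-run the full iteration count to confirm an output. This is an
        illustrative sketch with a placeholder content hash, not a normative
        implementation.
      </t>
      <sourcecode type="python"><![CDATA[
import hashlib

def iterated_sha256(seed: bytes, iterations: int) -> bytes:
    # Sequentially iterate SHA-256 over the running state.  No known
    # shortcut exists, so the output encodes a lower bound on elapsed
    # sequential computation.
    state = seed
    for _ in range(iterations):
        state = hashlib.sha256(state).digest()
    return state

# First test vector: ASCII seed, 10,000 iterations
genesis = iterated_sha256(b"witnessd-genesis-v1", 10_000)

# Second vector: entangled input "DST_CHAIN" || H(content) || Output_{n-1}
content_hash = hashlib.sha256(b"example content").digest()  # placeholder
entangled = b"DST_CHAIN" + content_hash + genesis
checkpoint = iterated_sha256(entangled, 50_000)
]]></sourcecode>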
    </section>

  </back>

</rfc>