TSUNAGI Architecture

The layers of the operator cockpit — deterministic Cardano infrastructure in Zig

TSUNAGI is a self-observing, decision-aware Cardano validator built in Zig. It compiles to a single static binary with three external dependencies: Zig, LMDB, and libsodium. The architecture separates concerns into distinct runtime subsystems connected through structured callbacks and bounded in-memory stores, with an observability layer that converts runtime events into decision metrics.

This page describes the real implemented system as it exists today, including what is built, what is offline-only, and what is not yet implemented.

  Cardano Network (relay peers)
        |
        v
  ┌─────────────────────────────────────────────────┐
  │  Ouroboros Protocol Engine                       │
  │  ChainSync / BlockFetch / TxSubmission           │
  │  KeepAlive / Mini-Protocol Multiplexer           │
  └────────────────────┬────────────────────────────┘
                       |
                       v
  ┌─────────────────────────────────────────────────┐
  │  Block Processing Pipeline                       │
  │  CBOR decode → tx decode → delta extraction      │
  │  shadow verification → state application         │
  └────────┬───────────┬───────────┬────────────────┘
           |           |           |
           v           v           v
  ┌────────────┐ ┌───────────┐ ┌───────────────────┐
  │ LMDB Store │ │ Journals  │ │ Observability     │
  │ UTxO set   │ │ (ndjson)  │ │ Stores            │
  │ undo log   │ │ 4 files   │ │ (ring buffers)    │
  │ coverage   │ │ append    │ │ bounded, in-mem   │
  └────────────┘ └─────┬─────┘ └─────────┬─────────┘
                       |                 |
                       v                 v
              ┌──────────────────────────────────┐
              │ JSON Generation Layer             │
              │ 11 shell scripts                  │
              │ jq-first / sed-fallback           │
              └──────────────┬───────────────────┘
                             |
                             v
              ┌──────────────────────────────────┐
              │ Web Surfaces                      │
              │ operator / explorer / status      │
              │ labs / decode / network           │
              └──────────────────────────────────┘
Figure 1 — TSUNAGI system overview showing data flow from network peers through the block pipeline to persistence, journals, observability stores, JSON generation, and web surfaces.
Decision Pipeline

TSUNAGI processes raw runtime events into structured signals, aggregates them, and derives decision metrics that describe node behavior.

   EVENTS      (chain events, mempool events, peer updates, forge outcomes)
      |
      v
   SIGNALS    (6 normalized [0,1] signals with per-signal confidence and drift)
      |
      v
   AGGREGATION (execution-quality score, peer scoring, adaptive success rate)
      |
      v
   DECISION   (Bayesian alpha/beta posterior, EV, Kelly, LLR)
      |
      v
   (POLICY    — future: adaptive + autonomous modes)
Figure 1b — Decision pipeline. Non-blocking, deterministic, no impact on forge path. Full reference at /decision.
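To make the decision stage concrete, here is a minimal, observe-only sketch of the metrics named above. The assumptions are mine, not taken from the TSUNAGI source: a Beta(1, 1) prior over outcome counts, unit gain/loss for the expected value, binary-outcome Kelly, and a log-likelihood ratio measured against a fixed 0.5 baseline.

```python
from math import log

def decision_metrics(successes: int, failures: int,
                     gain: float = 1.0, loss: float = 1.0,
                     baseline: float = 0.5) -> dict:
    """Observe-only decision metrics from a Bayesian alpha/beta posterior.

    Assumptions (illustrative, not from the TSUNAGI source): Beta(1,1)
    prior, unit gain/loss for EV, binary-outcome Kelly fraction, and an
    LLR against a fixed 0.5 baseline.
    """
    alpha = 1 + successes              # posterior alpha (uniform prior)
    beta = 1 + failures                # posterior beta
    p = alpha / (alpha + beta)         # posterior mean success probability

    ev = p * gain - (1 - p) * loss                          # expected value
    kelly = max(0.0, (p * gain - (1 - p) * loss) / gain)    # Kelly fraction, clamped
    llr = log(p / (1 - p)) - log(baseline / (1 - baseline)) # log-likelihood ratio
    return {"alpha": alpha, "beta": beta, "p": p,
            "ev": ev, "kelly": kelly, "llr": llr}

m = decision_metrics(successes=8, failures=2)
# alpha=9, beta=3 -> p=0.75, ev=0.5, kelly=0.5
```

Because the pipeline is observe-only, a function like this can run after each forge cycle without touching the forge path: it reads counts and emits numbers, nothing more.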
Runtime Core
Ouroboros Protocol Engine · Network
Full Ouroboros NodeToNode protocol implementation. ChainSync for header tracking, BlockFetch for block retrieval, TxSubmission for mempool interaction, and KeepAlive for connection liveness. A mini-protocol multiplexer manages concurrent protocol sessions over a single TCP connection.
Block Processing Pipeline · Decode + Extract
Each block passes through a deterministic pipeline: CBOR decode, per-transaction summary extraction (inputs, outputs, fee, metadata), UTxO delta computation (consumed/produced/net), and state application. A shadow ledger path processes every block independently to verify parity with the primary path. Divergence is detected within one block.
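The delta-plus-parity step can be sketched as follows. This is a simplified stand-in, assuming decoded transactions are reduced to (inputs, outputs) pairs; the real pipeline operates on full CBOR decodes.

```python
def extract_delta(txs):
    """UTxO delta for one block: consumed inputs, produced outputs,
    and the net change in UTxO count. `txs` is a list of
    (inputs, outputs) pairs -- a simplified stand-in for decoded txs."""
    consumed = sum(len(inputs) for inputs, _ in txs)
    produced = sum(len(outputs) for _, outputs in txs)
    return {"consumed": consumed, "produced": produced,
            "net": produced - consumed}

def shadow_check(block_txs):
    """Run the delta computation on two independent passes and require
    exact parity; any divergence is surfaced within one block."""
    primary = extract_delta(block_txs)
    shadow = extract_delta(list(block_txs))  # independent pass
    if primary != shadow:
        raise RuntimeError("shadow divergence")
    return primary

# Two txs: (2 in, 3 out) and (1 in, 1 out) -> consumed 3, produced 4, net +1
delta = shadow_check([(["a", "b"], ["x", "y", "z"]), (["c"], ["w"])])
```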
LMDB Persistent Storage · Persistence
Transactional persistent storage with atomic writes. LMDB holds the UTxO set, undo history, and coverage state. Three persistence modes: memory-only for speed, LMDB-native for progressive convergence, LMDB-truth for full verified persistence. Snapshot bootstrap via TSF2 format with Blake2b-256 digest and Ed25519 signature.
Cursor Persistence · Chain Tip State
The chain tip (slot, block number, tip hash) is persisted to cursor.json after every block. On restart the node resumes from the persisted cursor. The cursor file is the authoritative record of sync progress.
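A minimal sketch of this write-after-every-block pattern. The write-to-temp-then-rename step is an assumed implementation detail (it is the standard way to avoid a torn cursor file on crash), not confirmed from the TSUNAGI source; field names are illustrative.

```python
import json, os, tempfile

def persist_cursor(path: str, slot: int, block_no: int, tip_hash: str) -> None:
    """Persist the chain tip after every block. Writes to a temp file
    and renames it over cursor.json so a crash leaves either the old
    cursor or the new one, never a partial file."""
    cursor = {"slot": slot, "block": block_no, "hash": tip_hash}
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(cursor, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def load_cursor(path: str) -> dict:
    """On restart, resume sync from the persisted cursor."""
    with open(path) as f:
        return json.load(f)
```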
Observability Layer

TSUNAGI observability is passive. The runtime records events to bounded in-memory stores and append-only journal files. Shell scripts extract journal data into JSON endpoints. Web pages poll those endpoints. No observability component modifies runtime behavior.

Journals
Append-Only Event Logs
Four core journal files in NDJSON format: journal.ndjson (roll_forward, roll_backward, tx_decode), mempool.ndjson (mempool_tx), slot_observatory.ndjson (slot_observation), and peer_observatory.ndjson (peer_event). A fifth file, confirmation.ndjson, records confirmed transactions.
Stores
Bounded Ring Buffers
Fixed-capacity in-memory stores with no dynamic allocation. DeltaHistory (200 blocks), BlockDecodeStore (100), TxSummaryHistory (50), MempoolSummaryStore (128), SlotObservatory (512), PeerObservatory (512), ConfirmationTracker (256+256).
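The shared shape of these stores is a fixed-capacity ring buffer: storage allocated once, oldest entries overwritten when full, memory independent of runtime duration. A minimal sketch (field names are illustrative; the real stores are Zig structs with compile-time capacities):

```python
class RingStore:
    """Fixed-capacity ring buffer mirroring the bounded in-memory
    stores: no growth after construction, oldest entries overwritten."""
    def __init__(self, capacity: int):
        self.buf = [None] * capacity   # storage allocated once
        self.capacity = capacity
        self.head = 0                  # next write position
        self.count = 0                 # entries recorded (<= capacity)

    def record(self, item) -> None:
        self.buf[self.head] = item     # overwrite oldest when full
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def items(self):
        """Entries in insertion order, oldest first."""
        start = (self.head - self.count) % self.capacity
        return [self.buf[(start + i) % self.capacity] for i in range(self.count)]

deltas = RingStore(capacity=200)   # e.g. DeltaHistory holds 200 blocks
for n in range(250):
    deltas.record(n)
# only the most recent 200 entries survive: 50..249
```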
Block Pipeline
Mempool Observatory
Transactions submitted through the local TxSubmission path are decoded at submission time: inputs, outputs, fee, metadata, canonical txid. Summaries are recorded to the mempool journal and ring buffer without blocking submission.
Correlation
Confirmation Tracking
Locally submitted transactions are correlated with block inclusion using exact txid matching (blake2b256 of CBOR tx body). The confirmation tracker records submission time, confirmation time, and the confirming block. Bounded and local-only.
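The exact-txid correlation can be sketched as below. The Blake2b-256-over-CBOR-body txid is as described above; the tracker structure and method names are illustrative, and the bounded-capacity handling of the real tracker is omitted.

```python
import hashlib

def txid(tx_body_cbor: bytes) -> str:
    """Canonical txid: Blake2b-256 over the CBOR transaction body."""
    return hashlib.blake2b(tx_body_cbor, digest_size=32).hexdigest()

class ConfirmationTracker:
    """Correlate locally submitted txs with block inclusion.
    Sketch only: the real tracker is bounded (256+256 entries)."""
    def __init__(self):
        self.pending = {}  # txid -> submission time

    def on_submit(self, tx_body: bytes, now: float) -> str:
        tid = txid(tx_body)
        self.pending[tid] = now
        return tid

    def on_block(self, block_tx_bodies, block_no: int, now: float):
        """Exact-match txids in an incoming block against the pending set."""
        confirmed = []
        for body in block_tx_bodies:
            tid = txid(body)
            if tid in self.pending:
                submitted = self.pending.pop(tid)
                confirmed.append({"txid": tid, "block": block_no,
                                  "latency": now - submitted})
        return confirmed
```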
Consensus
Slot Observatory
Records block arrival rhythm across the chain timeline. Each observation captures the inter-block gap (empty slots between consecutive blocks). Passive recording — no influence on consensus.
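The gap computation itself is simple: for consecutive blocks at slots s0 < s1, the observation is s1 - s0 - 1 empty slots. A one-line sketch:

```python
def inter_block_gaps(block_slots):
    """Empty slots between consecutive blocks: for blocks at slots
    s0 < s1, the gap is s1 - s0 - 1 (slots with no block)."""
    return [b - a - 1 for a, b in zip(block_slots, block_slots[1:])]

# blocks at slots 10, 12, 17, 18 -> gaps 1, 4, 0
gaps = inter_block_gaps([10, 12, 17, 18])
```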
Network
Peer Observatory
Captures peer connect/disconnect events at the transport boundary and block arrival events. Only events that the runtime can truthfully attribute are recorded. The TxSubmission protocol is pull-based, so inbound peer transactions are not observable.
JSON Generation & Web Surface

The runtime produces journals and in-memory stores. Shell scripts read these sources and generate static JSON files. Web pages fetch the JSON on a 10-second polling interval. This separation means the web layer has zero coupling to the Zig runtime.

JSON Endpoint           Source                                 Web Consumer
operator.json           cursor.json, journal.ndjson            Operator dashboard
status.json             cursor.json, journal.ndjson            Status page
blocks.json             journal.ndjson, tx_decode.ndjson       Explorer
delta.json              journal.ndjson                         Explorer
decode.json             tx_decode.ndjson                       Decode page
network.json            peer_observatory.ndjson                Network page, Operator
slot-observatory.json   slot_observatory.ndjson                Slot page, Operator
mempool.json            mempool.ndjson                         Mempool page, Operator
confirmation.json       confirmation.ndjson, mempool.ndjson    Confirmation page, Operator
health.json             operator.json, network.json, etc.      Operator dashboard
producer-status.json    producer-bridge CLI                    Operator dashboard
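The real generators are shell scripts (jq-first with a sed fallback), but the extract-and-publish pattern they follow can be sketched language-neutrally. This Python stand-in, with hypothetical file names, filters one event type from an append-only NDJSON journal and writes a static JSON file for the web layer to poll:

```python
import json

def generate_endpoint(journal_path: str, out_path: str,
                      event_type: str, limit: int = 50) -> None:
    """Journal -> static JSON endpoint: filter one event type from an
    append-only NDJSON journal and publish the most recent entries.
    The shell scripts perform the same step with jq."""
    events = []
    with open(journal_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate a partially flushed trailing line
            ev = json.loads(line)
            if ev.get("type") == event_type:
                events.append(ev)
    with open(out_path, "w") as out:
        json.dump({"events": events[-limit:], "count": len(events)}, out)

# e.g. mempool.ndjson -> mempool.json, polled by the web page every 10 s
```

Because the generator only ever reads journals and writes static files, it can run on any schedule without touching the runtime, which is what gives the web layer its zero coupling.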
Health Engine

A composite health score aggregates five independently scored components into a single 0–100 value. All arithmetic is integer-only with no floating point. The score classifies the node as excellent (≥90), healthy (≥75), degraded (≥60), or critical (<60).

Component   Weight   Signal
Peer        25%      Disconnect/connect ratio from peer observatory
Slot        20%      Average inter-block gap from slot observatory
Block       25%      Rollback rate from block pipeline counters
Confirm     15%      Average confirmation time for locally submitted txs
Mempool     15%      Pending transaction count from mempool tracker
Health Score Design

Missing components receive a neutral score of 80, so the health engine degrades gracefully when observability data is unavailable. The Zig runtime computes health inline; a shell mirror script (generate-health-json.sh) produces the same score from the same inputs for the web layer.
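Putting the table and the neutral-fallback rule together, the scoring reduces to a weighted integer sum. A minimal sketch (component scores are assumed to be pre-normalized to 0-100; that normalization step is not shown):

```python
# Weights sum to 100, so the composite stays in 0..100 with integer math.
WEIGHTS = {"peer": 25, "slot": 20, "block": 25, "confirm": 15, "mempool": 15}
NEUTRAL = 80  # score assumed for a component with no data

def health_score(components: dict) -> tuple:
    """Composite 0-100 health from five component scores (each 0-100).
    Missing components fall back to the neutral score, so the engine
    degrades gracefully. Integer-only, matching the runtime's rule."""
    total = 0
    for name, weight in WEIGHTS.items():
        total += weight * components.get(name, NEUTRAL)
    score = total // 100  # integer division, no floating point
    if score >= 90:
        label = "excellent"
    elif score >= 75:
        label = "healthy"
    elif score >= 60:
        label = "degraded"
    else:
        label = "critical"
    return score, label

# All components at 100 except mempool missing (neutral 80):
# (25+20+25+15)*100 + 15*80 = 9700 -> score 97, "excellent"
```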

Block Pipeline Detail
  Block Body (CBOR bytes)
        |
        v
  ┌─────────────────────┐     ┌─────────────────────┐
  │ Transaction Decode   │     │ Shadow Ledger Path   │
  │ inputs, outputs,     │     │ independent decode   │
  │ fee, metadata        │     │ parity verification  │
  └──────────┬──────────┘     └──────────┬──────────┘
             |                           |
             v                           v
  ┌─────────────────────┐     ┌─────────────────────┐
  │ Delta Extraction     │     │ Shadow Delta Check   │
  │ consumed / produced  │     │ must match primary   │
  │ net UTxO change      │     │ divergence = error   │
  └──────────┬──────────┘     └─────────────────────┘
             |
     ┌───────┼───────┬──────────────┐
     |       |       |              |
     v       v       v              v
  LMDB   journal  delta_history  tx_summary
  state   .ndjson  ring buffer   ring buffer
  apply   append   (cap 200)     (cap 50)
Figure 2 — Block pipeline showing primary and shadow paths, delta extraction, and output sinks.
Producer Readiness

TSUNAGI includes an offline producer readiness harness. This evaluates whether the node would be elected leader for a given slot and assembles a local candidate block, but never broadcasts it. The producer path is entirely offline and local.

Producer Readiness Harness · Evaluation
Standalone module that evaluates Praos leadership eligibility using E34 fixed-point threshold arithmetic. Checks VRF output against stake-weighted threshold, validates KES key period and expiration, performs Sum6Kes sign + verify round trip, and assembles a local candidate block (header + empty body in CBOR). State contract: READY / NOT_LEADER / NOT_READY.
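Conceptually, the leadership check compares the VRF output, read as a fraction of its output space, against the stake-weighted Praos threshold phi(sigma) = 1 - (1 - f)^sigma, where f is the active slot coefficient and sigma the pool's relative stake. The sketch below uses floating point for clarity; the harness itself uses E34 fixed-point arithmetic, which this does not reproduce.

```python
def is_slot_leader(vrf_output: bytes, stake: int, total_stake: int,
                   active_slot_coeff: float = 0.05) -> bool:
    """Conceptual Praos eligibility check (float sketch; the real module
    uses E34 fixed-point). The node leads a slot when its VRF output,
    as a fraction of the output space, falls below the stake-weighted
    threshold phi(sigma) = 1 - (1 - f)^sigma."""
    sigma = stake / total_stake                              # relative stake
    threshold = 1.0 - (1.0 - active_slot_coeff) ** sigma     # phi_f(sigma)
    frac = int.from_bytes(vrf_output, "big") / (1 << (8 * len(vrf_output)))
    return frac < threshold
```

The KES round trip and candidate assembly steps that follow a positive check are not sketched here; they depend on concrete key material.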
Producer Bridge · Runtime Integration
Maps runtime-shaped inputs (cursor slot, genesis config, ENV-loaded VRF/KES/stake material) to the readiness evaluation module. Pure computation with no file IO, no network IO, no broadcast. CLI command reads cursor.json and environment variables, computes epoch/KES period, and produces a JSON status report.
Artifact Bundle · Offline Output
When the readiness evaluation returns READY, the harness can optionally write 5 artifact files to disk: candidate header CBOR, candidate body CBOR, full assembled block CBOR, hash manifest, and the readiness report JSON. These artifacts are written to a local directory and are never transmitted.
  cursor.json ──┐
  genesis.json ─┤
  ENV vars ─────┤
  (VRF, KES,    │
   stake, pool) │
                v
  ┌──────────────────────────────┐
  │ Producer Bridge              │
  │ compute epoch, KES period    │
  │ build BridgeInputs           │
  └──────────────┬───────────────┘
                 |
                 v
  ┌──────────────────────────────┐
  │ Readiness Evaluation         │
  │ VRF threshold check          │
  │ KES validity check           │
  │ sign + verify round trip     │
  │ candidate block assembly     │
  └──────────────┬───────────────┘
                 |
        ┌────────┴────────┐
        v                 v
  NOT_LEADER          READY
  (exit, report)      |
                      v
              ┌──────────────┐
              │ Artifact      │
              │ Bundle (opt)  │
              │ 5 files       │
              │ to local disk │
              └──────────────┘
Figure 3 — Producer readiness evaluation flow. All paths are offline. No network broadcast occurs.
Current Boundaries

TSUNAGI is a working follower node with full observability and an execution-aware forge pipeline proven on the preview network. The following table shows what is implemented and what is not.

Capability                           Status
ChainSync / BlockFetch               Live · Full Ouroboros protocol
TxSubmission                         Live · Local mempool submission path
Block decode + delta extraction      Live · Deterministic pipeline
LMDB UTxO persistence                Live · Three modes, snapshot bootstrap
Shadow ledger verification           Live · Per-block parity check
Journals + JSON + web dashboards     Live · Full observability pipeline
Health engine                        Live · 5-component weighted scoring
Mempool validation (13-check)        Live · Structural, balance, fee, witness, native script, Plutus boundary
Forge gate + ranking                 Live · Execution-aware candidate evaluation, deterministic scoring
Forge candidate construction         Live · Real KES-signed block candidates with embedded transactions (preview)
Deterministic candidate reordering   Live · Opt-in, policy-gated, mixed-score changed=1 proven on preview
Trust signals + aggregate evidence   Live · Multi-source (forge + mempool), normalized aggregate output
Trust score + decision metrics       Live · Observe-only BAeY / EV / Kelly / LLR from live evidence (preview)
Decision advisory                    Live · Observe-only stance, confidence, allocation from live metrics (preview)
Shadow policy preview                Live · Shadow-only recommendation + strength from advisory (preview)
Shadow policy impact                 Live · Shadow-only domain simulation: ordering/enforcement touch (preview)
Policy-coupled ordering              Live · Preview-only, operator-enabled ordering activation from policy impact (preview)
/decision operator surface           Live · Full decision stack in one HTTP endpoint (preview)
External evaluator boundary          Live · Protocol v1, known-case fingerprint matching, rule-based fallback
Slot + peer observatories            Live · Passive recording
Full Plutus/UPLC execution           Not yet implemented · Rule-based + known-case boundary only
General adaptive policy              Not yet implemented · Coupling is preview-only experiment, ordering-only
Network block broadcast              Not yet implemented · Candidates constructed but not broadcast
Mainnet block production             Not yet implemented
Proven on Preview, Not Yet Broadcasting

TSUNAGI constructs real KES-signed block candidates with embedded transactions on the preview network. The full forge evidence pipeline — gate evaluation, deterministic ranking, external boundary checking, trust-signal emission, and candidate reordering — has been validated under accepted-transaction load with a real relay connected. Mixed-score candidate reordering has been proven with visible changed=1 evidence using size-aware differentiation for accepted non-Plutus transactions.

Observe-only trust scoring, BAeY / EV / Kelly / LLR decision metrics, a deterministic decision advisory (stance, confidence, allocation), a shadow policy preview (recommendation, strength), and a shadow policy impact simulation (ordering/enforcement domain touch) are derived from live evidence during each forge cycle. A preview-only policy-coupled ordering experiment has been proven: shadow policy output activated the reorder path while forge policy was observe, demonstrating explicit operator-enabled policy coupling.

Block candidates are not yet broadcast to the network. Policy coupling is preview-only and ordering-only. General adaptive policy, full Plutus execution, and consensus-equivalent parity are not yet implemented.

Design Principles
Core
Determinism
Identical inputs produce identical outputs at every stage. The ledger pipeline, delta extraction, and block decode are fully deterministic. No stage introduces randomness or non-reproducible behavior.
Memory
Bounded Storage
All in-memory observability stores use fixed-capacity ring buffers with no dynamic allocation. Capacities are compile-time constants. The node's memory footprint does not grow with chain length or runtime duration.
Observability
Passive Recording
Observability is read-only. Journals append, stores record, JSON scripts extract. Nothing in the observability layer modifies block processing, ledger state, or protocol behavior.
Dependencies
Minimal Stack
Zig 0.15.2, LMDB, and libsodium. No framework, no package manager, no build system beyond Zig's built-in. Single static binary output. Shell scripts use only standard POSIX utilities plus optional jq.
The Operator Cockpit
Five Named Layers
TSURUGI 剣   · Execution  · ChainSync, BlockFetch, protocol operations
TATE 盾      · Defense    · Rollback protection, ancestry validation, fragment coverage
YAMORI 家守  · Guardian   · Health monitoring, safety, node integrity
KURA 蔵      · Storage    · UTxO + historical state, stabilized artifacts
KAGAMI 鏡    · Reflection · Observability, diagnostics, /status/full

These layers together form the operator cockpit — a complete, real-time view of the node's behavior and decisions.

Design Lineage
Conceptual Layer Model

The five named layers originate from Japanese functional metaphors and shaped the architectural separation of concerns that the current implementation inherits. The full philosophy is described in the manifesto; this page documents the system as built.

Let It Run. Let It Resolve.

tsunagi.tech · Independent Cardano infrastructure research