Architecture

How Asupersync enforces cancel-correctness, structured concurrency, and deterministic testing.

Overview

Runtime Architecture

Asupersync's runtime is built on three pillars: Regions for structure, the Cancel Protocol for cleanup, and the Lab for testing.

[Diagram]

  • Application layer: Region tree (structured concurrency scopes)
  • Cancel protocol: 1. Request → 2. Drain → 3. Finalize
  • Scheduler: Cancel lane (high priority), Timed lane (mid), Ready lane (low)
Core Model

Structured Concurrency

Every task lives inside a Region. Regions form a tree. When a parent Region closes, all children are cancelled — automatically.

regions.rs
```rust
// Regions scope task lifetimes
Region::open(cx, "server", async |cx| {
    // Child tasks are bound to this Region
    cx.spawn("worker-1", handle_requests());
    cx.spawn("worker-2", process_queue());

    // When this Region closes, workers are cancelled
    // via the cancel protocol (not silently dropped)
    cx.shutdown_signal().await;
}).await
```

Unlike Tokio's tokio::spawn which creates globally-scoped tasks, Asupersync tasks are always scoped to a Region. This eliminates orphaned futures — if you can't see the task, it can't outlive you.

Safety

Cancel Protocol

Three phases ensure resources are always cleaned up, even during cancellation.

1. Request

Cancel signal sent. Task sees CancelRequested on its Cx. Can begin graceful shutdown.

2. Drain

Task gets budgeted time to finish in-flight work. Flush buffers, close connections, release locks.

3. Finalize

Registered finalizers run in LIFO order. Like defer in Go, but integrated with the cancel protocol.
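The three phases above can be sketched as a small state machine. This is an illustrative model under stated assumptions, not Asupersync's API: `Task`, `drain_budget`, and the string-named finalizers are hypothetical stand-ins.

```rust
// Hypothetical model of the three-phase cancel protocol.
#[derive(Debug, PartialEq)]
enum CancelState { Running, CancelRequested, Draining, Finalizing, Completed }

struct Task {
    state: CancelState,
    drain_budget: u32,             // budgeted drain steps
    finalizers: Vec<&'static str>, // registered in order, run LIFO
}

impl Task {
    fn new(budget: u32) -> Self {
        Task { state: CancelState::Running, drain_budget: budget, finalizers: vec![] }
    }

    // Phase 1: Request. The task observes the cancel signal.
    fn request_cancel(&mut self) {
        self.state = CancelState::CancelRequested;
    }

    // Phase 2: Drain. Finish in-flight work until the budget is spent.
    fn drain(&mut self) {
        self.state = CancelState::Draining;
        while self.drain_budget > 0 {
            self.drain_budget -= 1; // flush buffers, close connections, ...
        }
        self.state = CancelState::Finalizing;
    }

    // Phase 3: Finalize. Run registered finalizers in LIFO order.
    fn finalize(&mut self) -> Vec<&'static str> {
        let order: Vec<_> = self.finalizers.drain(..).rev().collect();
        self.state = CancelState::Completed;
        order
    }
}

fn main() {
    let mut t = Task::new(3);
    t.finalizers.push("release-lock");
    t.finalizers.push("close-socket");
    t.request_cancel();
    t.drain();
    let order = t.finalize();
    assert_eq!(order, vec!["close-socket", "release-lock"]); // LIFO
    assert_eq!(t.state, CancelState::Completed);
    println!("cancel protocol completed: {:?}", order);
}
```

The forward-only state transitions mirror the lifecycle described in the Cancel Lifecycle section below.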

Security

Capability Tiers

Five hierarchical permission levels enforce the principle of least authority. Every task gets exactly the permissions it needs — nothing more.

  • FiberCap: Compute-only. No spawning, no I/O, no timers.
  • TaskCap: Can spawn child tasks within its Region.
  • IoCap: Network, filesystem, and timer access.
  • RemoteCap: Cross-machine communication and data transfer.
  • SupervisorCap: Can manage, restart, and cancel other tasks.

Each capability tier extends the one below it. A task with FiberCap can only compute — it cannot spawn, perform I/O, or set timers. IoCap adds network and filesystem access. At the top, SupervisorCap can manage and restart other tasks.

This hierarchy ensures that if a task is compromised or buggy, its blast radius is limited to its granted capabilities. A compute-only fiber cannot exfiltrate data over the network. An I/O task cannot restart other tasks.
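A minimal sketch of the tier hierarchy, assuming the five tiers form a total order (the enum name `Cap` and the `permits` check are hypothetical, not Asupersync's types):

```rust
// Illustrative model: the five tiers as an ordered enum.
// Derived Ord follows declaration order: Fiber < Task < Io < Remote < Supervisor.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Cap { Fiber, Task, Io, Remote, Supervisor }

// A tier grants an operation only if the required tier is at or below
// the tier held: least authority by construction.
fn permits(held: Cap, required: Cap) -> bool {
    held >= required
}

fn main() {
    assert!(permits(Cap::Io, Cap::Task));   // IoCap extends TaskCap
    assert!(!permits(Cap::Fiber, Cap::Io)); // compute-only: no I/O
    assert!(permits(Cap::Supervisor, Cap::Remote));
    println!("capability checks passed");
}
```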

Formal Foundations

Small-Step Semantics

35 transition rules define every possible async state change. Each rule is mechanized in Lean 4 for machine-checked correctness.

Think of small-step semantics like a recipe card for the runtime. Each transition rule has the form ⟨e, σ⟩ → ⟨e′, σ′⟩ (expression e in state σ steps to e′ in σ′) and says: "If the system is in state X, then one computation step produces state Y." By chaining rules together, you can trace any execution from start to finish — and prove that no step can violate safety guarantees.

SPAWN
⟨spawn(e), σ⟩ → ⟨permit(id), σ[id ↦ Running(e)]⟩

Spawning a task produces a Permit and registers the new task as Running in the state.

CANCEL-PROPAGATE
⟨cancel(r), σ⟩ → ⟨(), σ[t.state ↦ CancelRequested | t ∈ r]⟩

Cancelling a Region marks every task in it as CancelRequested.

DRAIN-TICK
⟨tick, σ[t: Draining(fuel)]⟩ → ⟨tick, σ[t: Draining(fuel-1)]⟩

Each scheduler tick decrements Cancel Fuel, guaranteeing termination.

EFFECT-RESERVE
⟨reserve(eff), σ⟩ → ⟨handle(id), σ[id ↦ Reserved(eff)]⟩

Staging an effect creates a reversible reservation — not yet committed.

CHECKPOINT-MASKED
⟨checkpoint, σ[t: CancelRequested]⟩ → ⟨Cancelled, σ[t: Draining]⟩

If a cancel is pending at a checkpoint, the task transitions to Draining and returns Cancelled.

CLOSE-CANCEL-CHILDREN
⟨close(r), σ⟩ → ⟨(), σ[t.state ↦ CancelRequested | t ∈ children(r)]⟩

Closing a Region first cancels all children, cascading down the Region tree.

These 6 rules are a sample — the full system has 35 rules covering checkpoint, commit, rollback, supervisor restart, and more. See the Spec Explorer for the complete formal specification.
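To make the rule notation concrete, here is a toy interpreter for two of the rules above (SPAWN and CANCEL-PROPAGATE) over a map-based state σ. This is a sketch with hypothetical types; the real rules are mechanized in Lean 4, not implemented like this.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum TaskState { Running, CancelRequested }

// σ maps a task id to (its region, its state).
type Sigma = HashMap<u32, (u32, TaskState)>;

// SPAWN: ⟨spawn(e), σ⟩ → ⟨permit(id), σ[id ↦ Running(e)]⟩
// The returned id plays the role of the Permit.
fn spawn(sigma: &mut Sigma, region: u32, next_id: &mut u32) -> u32 {
    let id = *next_id;
    *next_id += 1;
    sigma.insert(id, (region, TaskState::Running));
    id
}

// CANCEL-PROPAGATE: every task t ∈ r becomes CancelRequested.
fn cancel(sigma: &mut Sigma, region: u32) {
    for (_, (r, st)) in sigma.iter_mut() {
        if *r == region {
            *st = TaskState::CancelRequested;
        }
    }
}

fn main() {
    let mut sigma = Sigma::new();
    let mut next = 0;
    let a = spawn(&mut sigma, 1, &mut next);
    let b = spawn(&mut sigma, 1, &mut next);
    let c = spawn(&mut sigma, 2, &mut next);
    cancel(&mut sigma, 1);
    assert_eq!(sigma[&a].1, TaskState::CancelRequested);
    assert_eq!(sigma[&b].1, TaskState::CancelRequested);
    assert_eq!(sigma[&c].1, TaskState::Running); // other region untouched
    println!("cancel propagated to region 1 only");
}
```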

Testing Infrastructure

17 Test Oracles

The Lab runtime ships with 17 independent correctness monitors. Each oracle watches for a specific class of concurrency bug.

Each test oracle is an independent auditor. When you run a Lab test, all 17 oracles are active simultaneously — checking for resource leaks, protocol violations, budget overruns, and more. Combined with DPOR (Dynamic Partial Order Reduction: think of it as a maze solver that only explores paths that lead to different outcomes), the Lab systematically finds bugs that random testing would miss.

  • TaskLeak: Detects tasks that escape their Region scope
  • ObligationLeak: Catches Permits/Leases dropped without consumption
  • CancelProtocol: Verifies 3-phase cancel contract compliance
  • BudgetOverrun: Flags tasks exceeding their drain budget
  • RegionNesting: Validates parent-child Region tree invariants
  • SchedulerFairness: Ensures Cancel Lane gets proper priority
  • QuiescenceCheck: Confirms all tasks finished before Region close
  • FinalizerOrder: Verifies LIFO execution order of finalizers
  • CapabilityEscalation: Blocks unauthorized capability upgrades
  • SporkInvariant: Checks fork-spawned Region consistency
  • DeadlockFreedom: Detects cycles in the wait-graph
  • ProgressGuarantee: Validates martingale progress certificates
  • SeedDeterminism: Ensures identical seeds reproduce identical schedules
  • MemoryOrdering: Checks acquire/release memory consistency
  • FuelExhaustion: Prevents cancel propagation past fuel=0
  • SupervisorPolicy: Enforces restart limits and backoff policies
  • EffectAtomicity: Ensures two-phase effects commit atomically

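To show the shape of one of these monitors, here is a sketch of what a TaskLeak-style oracle checks, using hypothetical bookkeeping hooks (`on_spawn`, `on_complete`, `on_region_close` are illustrative names; the real Lab wires oracles into the executor):

```rust
use std::collections::HashSet;

#[derive(Default)]
struct TaskLeakOracle {
    live: HashSet<u32>, // tasks spawned in the region, not yet completed
}

impl TaskLeakOracle {
    fn on_spawn(&mut self, task: u32) { self.live.insert(task); }
    fn on_complete(&mut self, task: u32) { self.live.remove(&task); }

    // At region close, any still-live task has escaped its scope.
    fn on_region_close(&self) -> Result<(), Vec<u32>> {
        if self.live.is_empty() {
            Ok(())
        } else {
            Err(self.live.iter().copied().collect())
        }
    }
}

fn main() {
    let mut oracle = TaskLeakOracle::default();
    oracle.on_spawn(1);
    oracle.on_spawn(2);
    oracle.on_complete(1);
    assert!(oracle.on_region_close().is_err()); // task 2 leaked
    oracle.on_complete(2);
    assert!(oracle.on_region_close().is_ok());
    println!("TaskLeak oracle verdicts as expected");
}
```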
Cancel Lifecycle

State Machine

Five states, four transitions, and Cancel Fuel that guarantees termination. Walk through the full cancellation lifecycle.

Cancel Fuel is the key invariant: a counter that strictly decreases with each step of cancel propagation — like a rocket with a finite fuel tank. The formal proof is a supermartingale: a mathematical sequence that, on average, can only go down. This guarantees that cancellation always terminates, even in adversarial workloads.

The five states — Running, CancelRequested, Cancelling (Drain), Finalizing, and Completed — form a strict total order. A task can only move forward through these states, never backward. Combined with the budget algebra that composes nested region budgets (a product semiring: when regions nest, budgets merge element-wise as min(deadlines), min(quotas), max(priority)), this gives Asupersync its cancel-correctness guarantee.
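The budget composition can be sketched directly from the min/min/max rule. A minimal model, assuming a budget of just a deadline, a quota, and a priority (the `Budget` struct and field names are hypothetical):

```rust
// Hypothetical model of the budget algebra: nested budgets merge
// element-wise as min(deadline), min(quota), max(priority).
#[derive(Debug, Clone, Copy, PartialEq)]
struct Budget {
    deadline_ms: u64, // tightest deadline wins
    quota: u32,       // smallest quota wins
    priority: u8,     // highest priority propagates upward
}

fn merge(parent: Budget, child: Budget) -> Budget {
    Budget {
        deadline_ms: parent.deadline_ms.min(child.deadline_ms),
        quota: parent.quota.min(child.quota),
        priority: parent.priority.max(child.priority),
    }
}

fn main() {
    let outer = Budget { deadline_ms: 500, quota: 10, priority: 1 };
    let inner = Budget { deadline_ms: 200, quota: 50, priority: 3 };
    let effective = merge(outer, inner);
    assert_eq!(effective, Budget { deadline_ms: 200, quota: 10, priority: 3 });
    println!("effective budget: {:?}", effective);
}
```

Because min and max are associative and commutative, the effective budget of a deeply nested region is the same no matter how the merges are grouped.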

Security

Macaroon Attenuation

Capabilities are bearer tokens with 8 caveat predicates. Anyone can add restrictions — nobody can remove them. Delegation is safe by construction.

Inspired by Google's Macaroons paper, Asupersync's capabilities use a decentralized attenuation model: anyone holding a token can add restrictions (caveats), but no one can remove them. When you delegate a capability to a child task, you can restrict it by adding caveats (e.g. adding a deadline, restricting to a specific Region, or limiting to N uses) — but you can never widen an existing capability. This makes delegation inherently safe: you don't need a central authority to verify that a delegated token is valid.

The 8 caveat predicates cover time bounds (TimeBefore, TimeAfter), spatial scoping (RegionScope, TaskScope), usage limits (MaxUses, RateLimit), resource patterns (ResourceScope), and a Custom escape hatch for application-specific restrictions.
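Here is a sketch of add-only attenuation using two of the caveat predicates (TimeBefore and MaxUses); the `Capability` type and `attenuate`/`authorized` methods are hypothetical stand-ins, not the real token format:

```rust
// Hypothetical model: a capability with append-only caveats.
#[derive(Debug, Clone, PartialEq)]
enum Caveat { TimeBefore(u64), MaxUses(u32) }

#[derive(Debug, Clone)]
struct Capability { caveats: Vec<Caveat> }

impl Capability {
    // Delegation can only append caveats; there is no removal API,
    // so a delegated token is never broader than its parent.
    fn attenuate(mut self, caveat: Caveat) -> Capability {
        self.caveats.push(caveat);
        self
    }

    // A use is authorized only if every caveat holds.
    fn authorized(&self, now: u64, uses_so_far: u32) -> bool {
        self.caveats.iter().all(|c| match c {
            Caveat::TimeBefore(t) => now < *t,
            Caveat::MaxUses(n) => uses_so_far < *n,
        })
    }
}

fn main() {
    let root = Capability { caveats: vec![] };
    let delegated = root
        .attenuate(Caveat::TimeBefore(100))
        .attenuate(Caveat::MaxUses(3));
    assert!(delegated.authorized(50, 0));
    assert!(!delegated.authorized(150, 0)); // deadline passed
    assert!(!delegated.authorized(50, 3));  // uses exhausted
    println!("attenuated capability behaves as expected");
}
```

Because `attenuate` consumes and returns the token, restriction composes but never reverses: each delegation step can only shrink what the holder may do.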

Statistical Testing

E-Process Monitoring

Anytime-valid invariant monitoring via Ville's inequality. Peek at any time, reject the instant evidence is strong enough — no p-hacking penalty.

Traditional statistical tests require a fixed sample size: you commit to N observations upfront, then compute a p-value. Peeking at intermediate results invalidates the guarantee. E-processes solve this using a betting martingale: E_t = E_(t-1) × (1 + λ × (X_t - p₀)).

Via Ville's inequality, if the E-value ever exceeds 1/α, you know with mathematical certainty that the null hypothesis is false — regardless of when you checked. The Lab runtime monitors three critical invariants (task leak, obligation leak, quiescence) this way, with bet size λ=0.5 and null rate p₀=0.001.
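The update rule and threshold can be sketched in a few lines, using the stated λ = 0.5 and p₀ = 0.001 (the `EProcess` struct is a hypothetical illustration, not the Lab's monitor):

```rust
// Hypothetical E-process monitor: E_t = E_(t-1) * (1 + λ * (x_t - p0)).
struct EProcess { e: f64, lambda: f64, p0: f64 }

impl EProcess {
    fn new(lambda: f64, p0: f64) -> Self {
        EProcess { e: 1.0, lambda, p0 } // E starts at 1 (a fair bet)
    }

    // x = 1.0 if the invariant was violated on this observation, else 0.0.
    fn observe(&mut self, x: f64) -> f64 {
        self.e *= 1.0 + self.lambda * (x - self.p0);
        self.e
    }

    // By Ville's inequality, rejecting when E >= 1/α bounds the
    // Type-I error at α no matter when you check.
    fn rejects(&self, alpha: f64) -> bool {
        self.e >= 1.0 / alpha
    }
}

fn main() {
    let mut ep = EProcess::new(0.5, 0.001);
    // Clean observations shrink E only slightly (factor 0.9995)...
    for _ in 0..100 {
        ep.observe(0.0);
    }
    assert!(!ep.rejects(0.05));
    // ...but repeated violations compound multiplicatively (factor ~1.5).
    for _ in 0..20 {
        ep.observe(1.0);
    }
    assert!(ep.rejects(0.05));
    println!("final E = {:.3}", ep.e);
}
```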

Effect Safety

Saga Compensation

16 operation kinds, CALM-optimized coordination barriers, and automatic LIFO rollback. Distributed operations that unwind cleanly on failure.

Multi-step operations (create account → send email → charge card) are modeled as Sagas. Each step has a forward action and a compensating action. If any step fails or cancellation arrives, completed steps are unwound in LIFO order — the last completed step compensates first.

What makes this unique is CALM analysis (Consistency As Logical Monotonicity). The Saga engine classifies each of its 16 operation kinds as monotone (order-independent, like Reserve and Send) or non-monotone (barrier-required, like Commit and Release). Consecutive monotone operations are batched without any coordination barriers — synchronization is inserted only where the math demands it.
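The LIFO unwinding above can be sketched as follows. The `Step` type and `run_saga` function are hypothetical illustrations, not the real Saga API:

```rust
// Hypothetical model: each step either succeeds or triggers compensation.
struct Step {
    name: &'static str,
    forward_ok: bool, // whether the forward action succeeds
}

// Runs steps in order; on the first failure, compensates completed
// steps last-in-first-out and returns their names in that order.
fn run_saga(steps: &[Step]) -> Result<(), Vec<&'static str>> {
    let mut completed = Vec::new();
    for step in steps {
        if step.forward_ok {
            completed.push(step.name);
        } else {
            completed.reverse(); // LIFO: last completed compensates first
            return Err(completed);
        }
    }
    Ok(())
}

fn main() {
    let steps = [
        Step { name: "create-account", forward_ok: true },
        Step { name: "send-email", forward_ok: true },
        Step { name: "charge-card", forward_ok: false },
    ];
    let unwound = run_saga(&steps).unwrap_err();
    assert_eq!(unwound, vec!["send-email", "create-account"]);
    println!("compensated: {:?}", unwound);
}
```

If "charge card" fails, the email send and account creation are compensated in reverse order, leaving the system as if the Saga never ran.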

Development

Build Timeline

From formal proofs to published crate.

Phase 1

Core Runtime & Region Model

  • Designed the Region tree model with parent-child lifetime scoping

  • Implemented the three-phase cancel protocol (Request → Drain → Finalize)

  • Built the Cx capability context with spawn/IO/timer gates

  • Formal proofs for cancel-safety and region closure guarantees

Runtime Log v0.2
Phase 2

Linear Obligations & Permits

  • Designed the Permit/Lease linear type system

  • Implemented compile-time must-use enforcement

  • Built the three consumption paths: Send, Complete, Abort

  • Added Spork (spawn + fork) for structured task hierarchies

Phase 3

Three-Lane Scheduler

  • Implemented Cancel/Timed/Ready lane architecture

  • Built priority inversion prevention for cancel propagation

  • Designed quiescence detection for region closure

  • Benchmarked against Tokio's work-stealing scheduler

Phase 4

Lab Runtime & DPOR

  • Built deterministic executor with seed-controlled ordering

  • Implemented Dynamic Partial Order Reduction for systematic testing

  • Added replay capability for reproducing concurrency bugs

  • Integrated with cargo test for seamless developer experience

Phase 5

Built-in Protocols & Ecosystem

  • Implemented cancel-correct TCP, HTTP/1.1, and WebSocket protocols

  • Built cancel-aware channels, mutexes, and semaphores

  • Added supervisor trees with restart policies

  • Published to crates.io with comprehensive documentation
