Explore Asupersync's key concepts through interactive visualizations.
Click a region to cancel it and watch the cancel cascade propagate through its children.
The visualization displays an Asupersync region tree, the foundational data structure behind structured concurrency. Every task in the runtime lives inside a region, and regions nest to form a strict hierarchy. When a parent region closes or is cancelled, all of its children receive a cancel signal automatically.
Click any region node to trigger cancellation. The cancel protocol propagates downward through the tree: each child enters the drain phase, runs its finalizers, and then reports completion upward. This structured teardown eliminates orphaned tasks, a class of bugs that plagues runtimes like Tokio, where tokio::spawn creates globally-scoped futures with no parent linkage.
The practical consequence is composability. Because region lifetimes are lexically scoped, you can reason about task ownership the same way you reason about variable ownership in Rust. If a region is not visible in your code, no task inside it can outlive your scope. See the architecture deep-dive for the formal model.
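The bottom-up pruning described above can be sketched with a toy region tree. The `Region` type and `cancel_order` helper here are illustrative inventions for this page, not Asupersync's real API:

```rust
// Illustrative sketch only — not the Asupersync API.
struct Region {
    name: &'static str,
    children: Vec<Region>,
}

impl Region {
    // Cancel this region: children are cancelled (recursively) before the
    // parent itself, so the tree is pruned from the leaves upward and no
    // task is ever orphaned.
    fn cancel(&self, order: &mut Vec<&'static str>) {
        for child in &self.children {
            child.cancel(order);
        }
        order.push(self.name);
    }
}

fn cancel_order(root: &Region) -> Vec<&'static str> {
    let mut order = Vec::new();
    root.cancel(&mut order);
    order
}

fn main() {
    let root = Region {
        name: "root",
        children: vec![
            Region {
                name: "db",
                children: vec![Region { name: "db.conn", children: vec![] }],
            },
            Region { name: "http", children: vec![] },
        ],
    };
    // Leaves first, root last.
    assert_eq!(cancel_order(&root), vec!["db.conn", "db", "http", "root"]);
}
```

The recursion mirrors the invariant in the text: a parent reports completion only after every descendant has.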
Step through the three-phase cancel protocol and compare it with Tokio's silent drop.
This visualization walks through Asupersync's three-phase cancel protocol, the mechanism that replaces Tokio's implicit Drop-based cancellation. In Tokio, dropping a JoinHandle silently abandons the future mid-execution with no cleanup guarantee. Asupersync's protocol ensures that every cancelled task follows a structured shutdown sequence: Request, Drain, Finalize.
During the Request phase, the runtime sets a flag on the task's Cx token. The task can observe this flag at any checkpoint() call and begin graceful shutdown. The Drain phase gives the task a bounded budget to flush buffers, close connections, and release locks. Finally, the Finalize phase runs registered finalizers in LIFO order, guaranteeing that resources acquired last are released first.
Step through each phase in the visualization to see the state transitions. Then compare the Asupersync side with Tokio's behavior: notice how Tokio's drop produces no observable cleanup steps, while Asupersync's protocol generates a deterministic, auditable shutdown trace.
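The deterministic shutdown trace can be sketched as a tiny state machine. `Phase`, `Task`, and the trace strings are invented here for illustration; the real protocol runs inside the scheduler:

```rust
// Sketch of the Request → Drain → Finalize sequence; not Asupersync's real types.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Phase { Running, Requested, Draining, Finalized }

struct Task {
    phase: Phase,
    finalizers: Vec<&'static str>, // registered in acquisition order
    trace: Vec<String>,            // auditable shutdown trace
}

impl Task {
    fn new(finalizers: Vec<&'static str>) -> Self {
        Task { phase: Phase::Running, finalizers, trace: Vec::new() }
    }

    fn cancel(&mut self) {
        // Phase 1: Request — set the flag the task observes at checkpoints.
        self.phase = Phase::Requested;
        self.trace.push("request".into());
        // Phase 2: Drain — bounded window for graceful cleanup.
        self.phase = Phase::Draining;
        self.trace.push("drain".into());
        // Phase 3: Finalize — run finalizers in LIFO order
        // (last acquired, first released).
        while let Some(f) = self.finalizers.pop() {
            self.trace.push(format!("finalize:{f}"));
        }
        self.phase = Phase::Finalized;
    }
}

fn main() {
    let mut t = Task::new(vec!["file", "socket", "lock"]);
    t.cancel();
    let expected: Vec<String> =
        ["request", "drain", "finalize:lock", "finalize:socket", "finalize:file"]
            .iter().map(|s| s.to_string()).collect();
    assert_eq!(t.trace, expected);
    assert_eq!(t.phase, Phase::Finalized);
}
```

A Tokio drop, by contrast, would produce an empty trace: the future simply ceases to exist.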
Reserve side-effects so they stay reversible; commit them permanently only once they are safe from cancellation.
Two-phase effects solve a fundamental problem in cancellable systems: what happens to side-effects that are partially committed when cancellation arrives? In a traditional runtime, a function might send an email, then get cancelled before recording that it did so, leaving the system in an inconsistent state.
Asupersync splits every side-effect into two steps. The Reserve step stages the effect as a reversible intention, producing a handle the task can hold. The Commit step, which only succeeds if the task is not in a cancelled state, makes the effect permanent. If cancellation arrives between Reserve and Commit, the runtime automatically rolls back all reserved-but-uncommitted effects.
The visualization shows effects moving through this lifecycle. Try triggering a cancel during the window between reservation and commit to see the rollback in action. This pattern integrates with the obligation system: each reserved effect produces a permit that must be either committed or explicitly dropped, making leaked effects a compile-time error.
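The reserve/commit window can be sketched in a few lines. The `Runtime`, `Effect`, and `EffectState` names are invented for this example, not the real API:

```rust
// Sketch of the two-phase effect lifecycle; illustrative names only.
#[derive(Debug, PartialEq)]
enum EffectState { Reserved, Committed, RolledBack }

struct Effect { state: EffectState }

struct Runtime { cancelled: bool }

impl Runtime {
    // Reserve: stage the effect as a reversible intention.
    fn reserve(&self) -> Effect {
        Effect { state: EffectState::Reserved }
    }

    // Commit succeeds only if the task has not been cancelled; otherwise the
    // reservation is rolled back instead of being applied.
    fn commit(&self, mut e: Effect) -> Effect {
        e.state = if self.cancelled {
            EffectState::RolledBack
        } else {
            EffectState::Committed
        };
        e
    }
}

fn main() {
    // Happy path: reserve, then commit.
    let rt = Runtime { cancelled: false };
    assert_eq!(rt.commit(rt.reserve()).state, EffectState::Committed);

    // Cancellation arrives between Reserve and Commit: the effect rolls back.
    let mut rt = Runtime { cancelled: false };
    let pending = rt.reserve();
    rt.cancelled = true;
    assert_eq!(rt.commit(pending).state, EffectState::RolledBack);
}
```

Note that `commit` consumes the `Effect` by value, echoing the permit discipline: a reservation cannot be committed twice or silently forgotten.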
Toggle capabilities on the Cx token and see how the runtime intercepts unauthorized operations.
Capability security is Asupersync's enforcement mechanism for the principle of least authority. Every task receives a Cx token at spawn time, and that token carries a specific capability tier (FiberCap, TaskCap, IoCap, RemoteCap, or SupervisorCap) that determines what operations the task is permitted to perform. A task with FiberCap can only compute. A task with IoCap can additionally access the network and filesystem.
Toggle individual capabilities in the visualization to see the runtime's response. When a task attempts an operation outside its tier, the runtime does not panic or silently fail. Instead, the capability gate intercepts the call and returns a typed error, allowing the task to handle the denial gracefully.
This design eliminates an entire category of security vulnerabilities. Third-party library code that you spawn with FiberCap cannot exfiltrate data over the network, because the I/O capability simply does not exist in its token. The restriction is structural, not policy-based, so it cannot be bypassed by malicious code at runtime.
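A capability gate returning a typed error can be sketched like this. The `Tier` ordering, `CapabilityDenied` error, and `open_socket` method are stand-ins invented for the example:

```rust
// Sketch of a capability gate; illustrative stand-ins for the Cx machinery.
#[derive(Debug, PartialEq, Clone, Copy, PartialOrd)]
enum Tier { FiberCap, TaskCap, IoCap } // each tier extends the one below

#[derive(Debug, PartialEq)]
struct CapabilityDenied { required: Tier, held: Tier }

struct Cx { tier: Tier }

impl Cx {
    // The gate checks the token's tier; an unauthorized call returns a typed
    // error instead of panicking or failing silently.
    fn gate(&self, required: Tier) -> Result<(), CapabilityDenied> {
        if self.tier >= required {
            Ok(())
        } else {
            Err(CapabilityDenied { required, held: self.tier })
        }
    }

    fn open_socket(&self) -> Result<&'static str, CapabilityDenied> {
        self.gate(Tier::IoCap)?;
        Ok("socket opened")
    }
}

fn main() {
    // FiberCap cannot do I/O: the capability does not exist in its token.
    let sandboxed = Cx { tier: Tier::FiberCap };
    assert_eq!(
        sandboxed.open_socket(),
        Err(CapabilityDenied { required: Tier::IoCap, held: Tier::FiberCap })
    );
    assert_eq!(Cx { tier: Tier::IoCap }.open_socket(), Ok("socket opened"));
}
```

The denial carries both the required and held tiers, which is what lets callers degrade gracefully rather than crash.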
Capabilities behave like Macaroons. A child can add further restrictions, but can never strip the parent's constraints away.
This visualization demonstrates Asupersync's Macaroon-inspired capability delegation model. When a parent task delegates a capability to a child, it can attach additional restrictions (caveats) that narrow the child's permissions. The critical invariant: a child can never widen a capability it received. Delegation is a one-way ratchet toward less privilege.
The mechanism works without a central authority. Each caveat is cryptographically chained to the previous one, so any recipient can verify the full chain of restrictions locally. This means you can safely pass capabilities across trust boundaries, because the math guarantees that tampering with the caveat chain invalidates the entire token.
Try adding caveats in the visualization and observe how they stack. A parent that grants IoCap with a “read-only filesystem” caveat produces a child token that can never write to disk, regardless of what further delegations occur downstream. This composable restriction model is what makes Asupersync's capability security practical at scale.
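The one-way ratchet can be sketched as a chained tag. Real Macaroons chain keyed HMACs; the `DefaultHasher` below is not cryptographic and stands in only to show that each caveat's tag depends on everything before it, so caveats can be appended but never removed without invalidating the token:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy sketch of Macaroon-style attenuation. NOT secure: a real
// implementation chains HMACs under a secret key.
#[derive(Clone)]
struct Token {
    caveats: Vec<String>,
    tag: u64,
}

fn chain(prev_tag: u64, caveat: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev_tag.hash(&mut h);
    caveat.hash(&mut h);
    h.finish()
}

impl Token {
    fn mint(root: u64) -> Self {
        Token { caveats: Vec::new(), tag: root }
    }

    // Attenuate: anyone holding the token may ADD a restriction...
    fn attenuate(&self, caveat: &str) -> Self {
        let mut t = self.clone();
        t.caveats.push(caveat.to_string());
        t.tag = chain(self.tag, caveat);
        t
    }

    // ...but the verifier recomputes the chain from the root, so a
    // stripped caveat produces a tag mismatch.
    fn verify(&self, root: u64) -> bool {
        let mut tag = root;
        for c in &self.caveats {
            tag = chain(tag, c);
        }
        tag == self.tag
    }
}

fn main() {
    let root = 0xA5;
    let io = Token::mint(root)
        .attenuate("fs:read-only")
        .attenuate("deadline:5s");
    assert!(io.verify(root));

    // A holder that drops the parent's "fs:read-only" caveat fails verification.
    let mut forged = io.clone();
    forged.caveats.remove(0);
    assert!(!forged.verify(root));
}
```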
Cancel budgets mathematically compose downwards across nested regions using a product semiring.
Budget algebra governs how cancellation time limits propagate through the region tree. Each region declares a budget: the maximum time (or ticks) its tasks are allowed to spend in the drain phase during cancellation. When regions nest, their budgets compose using a product semiring, which takes the minimum of parent and child budgets.
This algebraic composition ensures a global invariant: no nested region can drain longer than its parent permits. If a parent region has a 100ms drain budget and a child declares 500ms, the effective child budget is clamped to 100ms. The composition is associative and commutative, so the nesting depth does not affect the computation's correctness.
Adjust the budget values in the visualization to see how they propagate. Notice that the effective budget at any leaf node is always the minimum along its path from the root. This property is what makes cancel fuel a termination proof: the Lyapunov potential is bounded by the root budget, guaranteeing that the entire cancel cascade completes in finite time.
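The element-wise composition can be sketched directly. The `Budget` fields follow the description in the text (min deadlines, min quotas, max priority), but the type itself is invented for illustration:

```rust
// Sketch of product-semiring budget composition; illustrative types only.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Budget { deadline_ms: u64, quota: u64, priority: u8 }

// Element-wise combine: tightest deadline and smallest quota win;
// the highest priority propagates.
fn combine(parent: Budget, child: Budget) -> Budget {
    Budget {
        deadline_ms: parent.deadline_ms.min(child.deadline_ms),
        quota: parent.quota.min(child.quota),
        priority: parent.priority.max(child.priority),
    }
}

// The effective budget at a leaf is the fold of combine along the
// root-to-leaf path.
fn effective(path: &[Budget]) -> Budget {
    path.iter().copied().reduce(combine).expect("non-empty path")
}

fn main() {
    let root = Budget { deadline_ms: 100, quota: 1000, priority: 1 };
    let mid  = Budget { deadline_ms: 500, quota: 200,  priority: 3 };
    let leaf = Budget { deadline_ms: 250, quota: 4000, priority: 2 };

    // The child's 500ms declaration is clamped by the parent's 100ms.
    let eff = effective(&[root, mid, leaf]);
    assert_eq!(eff, Budget { deadline_ms: 100, quota: 200, priority: 3 });

    // Associativity: regrouping the nesting does not change the result.
    assert_eq!(combine(combine(root, mid), leaf), combine(root, combine(mid, leaf)));
}
```

Because `min` and `max` are associative and commutative with identity elements (infinity and zero, respectively), the fold is well-defined at any nesting depth.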
Watch the Fiedler value plummet as a cyclic wait-graph forms, triggering proactive intervention.
Spectral deadlock detection monitors the runtime's wait-graph in real time by computing the second-smallest eigenvalue of its Laplacian matrix, known as the Fiedler value. A connected graph with no cycles has a positive Fiedler value. As a deadlock cycle forms, the Fiedler value drops toward zero, giving the runtime advance warning before tasks actually block.
The visualization builds a wait-graph incrementally. Watch the Fiedler value in the readout as edges are added. When it crosses the critical threshold, the runtime triggers intervention: it identifies the task in the cycle with the lowest priority and cancels it via the cancel protocol, breaking the deadlock without operator intervention.
Traditional deadlock detectors are reactive: they find cycles after tasks are already stuck. Spectral analysis is predictive. The Fiedler value functions as a continuous health metric, analogous to monitoring CPU temperature rather than waiting for a thermal shutdown. This approach catches near-deadlocks, situations where tasks are one edge away from a cycle, before they become production incidents.
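The Fiedler value of a small wait-graph can be computed with projected power iteration, as a sketch; a production runtime would use an incremental sparse eigensolver rather than the dense loop below:

```rust
// Sketch: Fiedler value (second-smallest Laplacian eigenvalue) of an
// undirected wait-graph via power iteration on M = cI - L, projected
// orthogonal to the all-ones vector (L's eigenvector for eigenvalue 0).
fn fiedler(n: usize, edges: &[(usize, usize)]) -> f64 {
    // Build the Laplacian L = D - A.
    let mut l = vec![vec![0.0f64; n]; n];
    for &(a, b) in edges {
        l[a][a] += 1.0;
        l[b][b] += 1.0;
        l[a][b] -= 1.0;
        l[b][a] -= 1.0;
    }
    // c exceeds the largest eigenvalue, so the dominant eigenvalue of M in
    // the projected subspace is c - lambda_2.
    let c = 2.0 * (0..n).map(|i| l[i][i]).fold(0.0, f64::max) + 1.0;
    let mut v: Vec<f64> = (0..n).map(|i| (i as f64 + 1.0).sin()).collect();
    for _ in 0..500 {
        // Project out the constant component.
        let mean = v.iter().sum::<f64>() / n as f64;
        for x in v.iter_mut() { *x -= mean; }
        // v <- (cI - L) v, then normalize.
        let w: Vec<f64> = (0..n)
            .map(|i| c * v[i] - (0..n).map(|j| l[i][j] * v[j]).sum::<f64>())
            .collect();
        let norm = w.iter().map(|x| x * x).sum::<f64>().sqrt();
        v = w.iter().map(|x| x / norm).collect();
    }
    // Rayleigh quotient of L recovers lambda_2.
    (0..n).map(|i| v[i] * (0..n).map(|j| l[i][j] * v[j]).sum::<f64>()).sum()
}

fn main() {
    // Tightly connected triangle: healthy, large Fiedler value (3).
    assert!((fiedler(3, &[(0, 1), (1, 2), (0, 2)]) - 3.0).abs() < 1e-6);
    // Two triangles joined by one bridge: near-partition, value sinks below 1.
    let barbell = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)];
    let lam = fiedler(6, &barbell);
    assert!(lam > 0.01 && lam < 1.0);
    // Disconnected graph: Fiedler value hits zero — the deadlock signature.
    assert!(fiedler(4, &[(0, 1), (2, 3)]).abs() < 1e-6);
}
```

The three cases trace the narrative above: well-connected (high), one edge from splitting (low), and split (zero).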
Rateless erasure coding reconstructs data perfectly over lossy channels, with no retransmission requests.
Fountain codes (specifically RaptorQ) allow Asupersync to transfer data reliably across lossy channels without acknowledgments or retransmission requests. The sender encodes the original data into a potentially infinite stream of coded symbols. The receiver can reconstruct the original data from any sufficient subset of received symbols, regardless of which specific symbols were lost.
The visualization shows symbols flowing from sender to receiver with a configurable packet loss rate. Unlike TCP, which stalls on every dropped packet to request retransmission, RaptorQ continues transmitting new coded symbols. The receiver accumulates symbols until it has slightly more than the original data size (typically 0.2% overhead), then reconstructs the complete message in one pass using the Luby transform.
This property makes fountain codes ideal for cancel-heavy workloads where connections may be interrupted at any moment. If a cancel signal arrives mid-transfer, the receiver either has enough symbols to decode (and the data is fully recovered) or it does not (and the partial transfer is cleanly discarded). There is no half-received state to reconcile.
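A toy rateless code makes the "any sufficient subset" property concrete. Each coded symbol XORs a pseudo-random subset of the source bytes; real RaptorQ (RFC 6330) is far more efficient and uses a structured degree distribution, so the LCG mask generator and Gaussian-elimination decoder below are illustrative only:

```rust
// Toy rateless code over GF(2); not RaptorQ, just the core idea.
struct Decoder {
    k: usize,
    pivots: Vec<Option<(u32, u8)>>, // row whose lowest set bit is the index
}

impl Decoder {
    fn new(k: usize) -> Self {
        Decoder { k, pivots: vec![None; k] }
    }

    // Insert a received (mask, value) symbol; returns the recovered data
    // once the collected symbols span all k sources (rank k over GF(2)).
    fn push(&mut self, mut mask: u32, mut value: u8) -> Option<Vec<u8>> {
        while mask != 0 {
            let bit = mask.trailing_zeros() as usize;
            match self.pivots[bit] {
                Some((m, v)) => { mask ^= m; value ^= v; } // eliminate
                None => { self.pivots[bit] = Some((mask, value)); break; }
            }
        }
        if self.pivots.iter().all(|p| p.is_some()) {
            // Back-substitute from the highest pivot down.
            let mut out = vec![0u8; self.k];
            for i in (0..self.k).rev() {
                let (m, mut v) = self.pivots[i].unwrap();
                for j in (i + 1)..self.k {
                    if m >> j & 1 == 1 { v ^= out[j]; }
                }
                out[i] = v;
            }
            return Some(out);
        }
        None
    }
}

// Derive a pseudo-random nonzero source subset from the symbol index
// (a mixed LCG stands in for RaptorQ's tuple generator).
fn encode(data: &[u8], index: u64) -> (u32, u8) {
    let mut s = index.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    s ^= s >> 33;
    let mask = ((s as u32) & ((1u32 << data.len()) - 1)).max(1);
    let value = data.iter().enumerate()
        .filter(|(i, _)| mask >> i & 1 == 1)
        .fold(0u8, |acc, (_, b)| acc ^ b);
    (mask, value)
}

fn main() {
    let data = b"RAPTORQ!".to_vec(); // k = 8 source bytes
    let mut dec = Decoder::new(data.len());
    let mut recovered = None;
    for i in 0..1000u64 {
        if i % 3 == 0 { continue; } // the channel drops every third symbol
        let (mask, value) = encode(&data, i);
        if let Some(out) = dec.push(mask, value) { recovered = Some(out); break; }
    }
    // The receiver needed only "enough" symbols, not any specific ones.
    assert_eq!(recovered, Some(data));
}
```

Note the cancel-friendly shape of the decoder: until rank k is reached it holds only pivot rows, which can be discarded wholesale with no half-received state to reconcile.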
GenServers that mathematically cannot forget to reply to clients.
Spork is Asupersync's actor framework, inspired by Erlang/OTP but with a critical fix: the linear obligation system guarantees that every GenServer call receives a reply. In Erlang, a gen_server that crashes or forgets to call gen_server:reply/2 leaves the caller hanging indefinitely. In Spork, the reply obligation is a permit that must be consumed before the handler returns, enforced at compile time.
The visualization shows message flow through a Spork GenServer. Each incoming call produces a reply permit, and the mailbox tracks which permits are outstanding. If a handler attempts to return without consuming the reply permit, the ObligationLeak oracle catches it during Lab testing.
Spork also integrates with the region tree. Each GenServer lives inside a region, and its supervisor can restart it using the cancel protocol rather than Erlang's abrupt kill-and-restart.
This means in-flight messages are drained and finalizers run before the new instance starts, eliminating the lost-message window that plagues OTP restarts.
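The reply-permit discipline can be approximated in plain Rust. Since Rust lacks true linear types, the sketch below combines move semantics (the permit is consumed by value, so double-replies will not compile) with a Drop bomb that stands in for the ObligationLeak oracle; `ReplyPermit` and `handle_call` are invented names:

```rust
// Sketch of a reply obligation; not Spork's real API.
struct ReplyPermit { armed: bool }

impl ReplyPermit {
    fn new() -> Self {
        ReplyPermit { armed: true }
    }

    // Consuming `self` by value means a handler can reply at most once.
    fn reply(mut self, msg: &str) -> String {
        self.armed = false; // defuse before the permit drops
        format!("reply: {msg}")
    }
}

impl Drop for ReplyPermit {
    fn drop(&mut self) {
        // A handler returned without consuming its permit: flag the leak
        // (the oracle would report where the obligation was created).
        if self.armed {
            panic!("ObligationLeak: reply permit dropped unconsumed");
        }
    }
}

// A toy call handler: every call mints a permit the handler must consume.
fn handle_call(request: &str) -> String {
    let permit = ReplyPermit::new();
    permit.reply(request) // deleting this line would panic at drop
}

fn main() {
    assert_eq!(handle_call("ping"), "reply: ping");
}
```

The text's stronger claim — compile-time enforcement — would require the obligation tracking the linear obligation system provides; this sketch only catches the leak at runtime, as the Lab oracle does.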
See how the runtime pushes synchronization barriers out of the hot path for monotone operations.
The CALM theorem (Consistency As Logical Monotonicity) states that monotone operations, those whose outputs only grow as inputs grow, can be computed without coordination barriers. Asupersync's runtime performs CALM analysis on each operation to determine whether it requires synchronization.
Operations classified as monotone (like appending to a log or adding to a set) execute without barriers, while non-monotone operationsNon-Monotone OperationBarrier-requiring actionAn operation that checks a threshold or relies on negation (e.g., 'close this region only if zero tasks remain'). Because it must ensure no other thread is secretly adding a task while it checks, it fundamentally requires a heavy synchronization lock to execute safely. (like reading a counter's final value) insert barriers only where the math demands them.
The visualization shows a stream of operations being classified and scheduled. Consecutive monotone operationsMonotone OperationOrder-independent operation that needs no coordinationAn operation whose result doesn't depend on the order it's executed relative to other monotone operations. Reserve, Send, Acquire, Renew, and CrdtMerge are all monotone in Asupersync's Saga engine. Because they're order-independent, they can be batched and executed without coordination barriers — a key performance optimization identified by CALM analysis. are batched and executed concurrently. When a non-monotone operationNon-Monotone OperationBarrier-requiring actionAn operation that checks a threshold or relies on negation (e.g., 'close this region only if zero tasks remain'). Because it must ensure no other thread is secretly adding a task while it checks, it fundamentally requires a heavy synchronization lock to execute safely. appears, the runtime inserts a coordination barrierCoordination BarrierSync point required before non-monotone operationsA synchronization point that the Saga engine inserts before non-monotone operations (like Commit or Release). Monotone operations (like Reserve and Send) can execute without barriers because they're order-independent. CALM analysis determines exactly where barriers are needed, minimizing unnecessary synchronization., waits for all in-flight monotone operations to complete, then proceeds.
This optimization is not a heuristic. The CALM theoremCALM TheoremConsistency As Logical MonotonicityA formal theorem from distributed systems theory: a program can be made eventually consistent without coordination if and only if it is monotone (i.e., it never retracts information). Asupersync applies this to classify operations: monotone operations (Reserve, Send, Acquire) can execute lock-free, while non-monotone operations (Commit, Release, RegionClose) require synchronization barriers. provides a mathematical guarantee that removing these barriers does not affect correctness. The result is higher throughput without sacrificing consistency, because synchronization cost is paid only where it is provably necessary.
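The classification-and-batching loop can be sketched in a few lines of Rust. This is an illustrative model, not Asupersync's actual engine: the `Op` variants follow the glossary, and a barrier is modeled simply as a batch boundary.

```rust
// Sketch (hypothetical types): batch consecutive monotone operations and
// insert a coordination barrier before each non-monotone one.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Op { Reserve, Send, Acquire, Commit, Release }

fn is_monotone(op: Op) -> bool {
    matches!(op, Op::Reserve | Op::Send | Op::Acquire)
}

/// Split an operation stream into batches; a new batch starts at every
/// non-monotone operation, which models a coordination barrier.
fn schedule(ops: &[Op]) -> Vec<Vec<Op>> {
    let mut batches: Vec<Vec<Op>> = Vec::new();
    let mut current: Vec<Op> = Vec::new();
    for &op in ops {
        if is_monotone(op) {
            current.push(op); // order-independent: runs barrier-free
        } else {
            if !current.is_empty() {
                batches.push(std::mem::take(&mut current));
            }
            batches.push(vec![op]); // barrier: executes alone
        }
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}
```

Everything inside one batch may execute concurrently; the batch boundaries are exactly where the CALM analysis says synchronization is unavoidable.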
A mathematical energy function that strictly decreases over time, proving the system will reach quiescenceQuiescenceWhen a region goes completely silentThe state of a Region when all its tasks have finished, and no new tasks can possibly be created inside it. It's the equivalent of making sure everyone has left the building and the doors are locked before turning off the lights..
A Lyapunov potentialLyapunov Potential4-component energy function proving system progressA formal 'energy measure' composed of four terms: live task count, pending obligation age, draining region count, and deadline pressure. Like a ball's potential energy decreasing as it rolls downhill, this function strictly decreases with each scheduler step — mathematically proving the runtime always makes progress toward quiescence. The scheduler can tune weights (obligation-focused vs deadline-focused) to prioritize different workload patterns. is a mathematical energy function assigned to the runtime's state. The key property: no scheduler step ever increases the potential, every productive step strictly decreases it, and the potential is bounded below by zero. This monotone descent proves that the system must eventually reach quiescenceQuiescenceWhen a region goes completely silentThe state of a Region when all its tasks have finished, and no new tasks can possibly be created inside it. It's the equivalent of making sure everyone has left the building and the doors are locked before turning off the lights., a state where no more work remains.
The visualization plots the potential value over time as the scheduler processes tasks. Each completed task, each drained regionRegionA structured concurrency scopeA lifetime-bounded scope that owns tasks. When a Region closes, all tasks within it are cancelled (via the cancel protocol), awaited to completion, and their resources cleaned up. Regions nest hierarchically, forming a tree., and each consumed permitPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create. reduces the potential. The curve descends toward zero, and the formal proof guarantees it reaches zero in finitely many steps.
This is not just a theoretical curiosity. The Lyapunov potentialLyapunov Potential4-component energy function proving system progressA formal 'energy measure' composed of four terms: live task count, pending obligation age, draining region count, and deadline pressure. Like a ball's potential energy decreasing as it rolls downhill, this function strictly decreases with each scheduler step — mathematically proving the runtime always makes progress toward quiescence. The scheduler can tune weights (obligation-focused vs deadline-focused) to prioritize different workload patterns. serves as a progress metric in production. If the potential stops decreasing, something is wrong: a task may be livelock-spinning, a regionRegionA structured concurrency scopeA lifetime-bounded scope that owns tasks. When a Region closes, all tasks within it are cancelled (via the cancel protocol), awaited to completion, and their resources cleaned up. Regions nest hierarchically, forming a tree. may be failing to drain, or a permitPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create. may have leaked. The Lab runtimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. checks this invariant via a martingale progress certificateMartingale Progress CertificateProbabilistic proof of task completionA formal certificate based on supermartingale theory that proves tasks make monotonic progress toward completion. Like a ball rolling downhill — the certificate mathematically guarantees the ball can never roll back up, ensuring every task eventually finishes or is cleanly cancelled., catching liveness violations during testing rather than in production.
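A minimal model of the potential function, with illustrative weights (the four components match the glossary; the actual weight tuning is workload-dependent and not shown here):

```rust
// Sketch (hypothetical state and weights): a 4-component potential over
// runtime state. The scheduler step below strictly decreases it.
#[derive(Clone, Copy)]
struct State {
    live_tasks: u64,
    obligation_age: u64,   // summed age of pending permits/leases
    draining_regions: u64,
    deadline_pressure: u64,
}

fn potential(s: State) -> u64 {
    // Weights are illustrative, not Asupersync's actual tuning.
    3 * s.live_tasks + 2 * s.obligation_age + 2 * s.draining_regions + s.deadline_pressure
}

/// One productive scheduler step: complete a task (assumes one is live)
/// and retire one unit of pending obligation age.
fn complete_task(mut s: State) -> State {
    s.live_tasks = s.live_tasks.saturating_sub(1);
    s.obligation_age = s.obligation_age.saturating_sub(1);
    s
}
```

Because every component is non-negative and productive steps strictly shrink the weighted sum, the potential can only hit zero, never oscillate forever.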
A multi-armed bandit algorithm continuously tunes the scheduler's cancel-streak limits based on real-time regretRegret BoundConvergence guarantee for adaptive schedulingA mathematical proof that the EXP3 adaptive scheduler's cumulative 'regret' (difference between its performance and the best fixed strategy in hindsight) grows sublinearly. In practice, this means the scheduler converges to optimal behavior — it can't be fooled for long by adversarial workloads. minimization.
The EXP3 schedulerEXP3 SchedulerAdaptive scheduling via exponential weightsAn online learning algorithm (Exponential-weight algorithm for Exploration and Exploitation) that adapts the scheduler's behavior based on observed cancellation patterns. The scheduler maintains 5 arms [4, 8, 16, 32, 64] representing cancel-streak thresholds. If certain thresholds consistently perform better, EXP3 shifts weight toward them with exploration rate γ=0.07, converging toward the optimal strategy over time. treats scheduling policy as an online learning problem. Each scheduling decision (how many consecutive cancel tasks to process before switching to ready tasks, for example) is an “arm” in a multi-armed bandit. The EXP3 algorithm assigns weights to each arm based on observed rewards (throughput, latency, cancel responsiveness) and selects arms with probability proportional to their weights.
The visualization shows the arm weights updating in real time as the scheduler processes tasks. Arms that produce good outcomes gain weight; arms that produce poor outcomes lose weight. The regret boundRegret BoundConvergence guarantee for adaptive schedulingA mathematical proof that the EXP3 adaptive scheduler's cumulative 'regret' (difference between its performance and the best fixed strategy in hindsight) grows sublinearly. In practice, this means the scheduler converges to optimal behavior — it can't be fooled for long by adversarial workloads. guarantees that the scheduler's cumulative regret grows sublinearly, so its average performance converges to that of the best fixed policy in hindsight, even if the workload changes adversarially.
Unlike static scheduling policies that must be tuned by operators, the EXP3 approach adapts automatically. A workload that suddenly becomes cancel-heavy will shift weight toward longer cancel streaks within milliseconds. The hedge algorithmHedge AlgorithmOnline learning for cancel-streak thresholdsAn online learning algorithm that tunes the scheduler's cancel-streak detection thresholds. When the runtime sees repeated cancellations in a region, Hedge adjusts how aggressively the scheduler pre-empts work there, minimizing wasted computation without over-reacting to transient patterns. variant provides even tighter bounds when the number of arms is small, making this practical for the three-lane schedulerThree-Lane SchedulerCancel / Timed / Ready prioritiesA traffic control system for tasks. The 'Cancel Lane' is the ambulance: it always gets priority so cleanups happen instantly. The 'Timed Lane' is for scheduled events, and the 'Ready Lane' handles normal traffic.'s finite set of configuration knobs.
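The core EXP3 update is compact enough to sketch directly. The arm values and γ = 0.07 come from the text; the reward plumbing and the random sampling step are simplified out, so this shows only how probabilities and weights evolve after an observed reward.

```rust
// Sketch of the EXP3 weight update over the five cancel-streak arms.
const GAMMA: f64 = 0.07; // exploration rate from the text

struct Exp3 {
    arms: Vec<u32>,    // cancel-streak thresholds, e.g. [4, 8, 16, 32, 64]
    weights: Vec<f64>,
}

impl Exp3 {
    fn new(arms: Vec<u32>) -> Self {
        let n = arms.len();
        Exp3 { arms, weights: vec![1.0; n] }
    }

    /// p_i = (1 - γ) · w_i / Σw + γ / K  (uniform exploration floor)
    fn probs(&self) -> Vec<f64> {
        let total: f64 = self.weights.iter().sum();
        let k = self.weights.len() as f64;
        self.weights
            .iter()
            .map(|w| (1.0 - GAMMA) * w / total + GAMMA / k)
            .collect()
    }

    /// Importance-weighted update for the arm that was actually played.
    fn update(&mut self, arm: usize, reward: f64) {
        let p = self.probs()[arm];
        let est = reward / p; // unbiased estimate of the arm's reward
        let k = self.weights.len() as f64;
        self.weights[arm] *= (GAMMA * est / k).exp();
    }
}
```

The exploration floor γ / K is what prevents the scheduler from ever fully abandoning an arm, which is why it can recover quickly when the workload shifts.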
The Lab RuntimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. drops cancel bombs at every single await point to mathematically prove your application survives interruption.
Systematic cancellation testing replaces ad-hoc timeout tests with exhaustive verification. The Lab runtimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. instruments your code to intercept every checkpoint() call, then replays the test multiple times, injecting a cancellation at a different await point on each replay. If your code has N await points, the Lab runs N+1 executions: one clean run and one cancellation at each point.
The visualization shows these injection points lighting up as the Lab works through them. Each injected cancellation triggers the full cancel protocolThree-Phase Cancel ProtocolRequest → Drain → FinalizeA rigorous lifecycle for stopping tasks. Phase 1: Request (signal intent to stop). Phase 2: Drain (allow tasks to finish writing to disk, closing network sockets, etc.). Phase 3: Finalize (force-release all remaining resources). This prevents the 'silent drop' problem., and the suite of test oraclesTest OracleRuntime correctness monitor for Lab testingA specialized runtime monitor that checks a specific class of correctness invariant during Lab testing. Each of the 17 built-in oracles watches for a different kind of bug — from resource leaks to protocol violations to budget overruns — acting as an automated auditor for your concurrent code. (including the TaskLeakTaskLeak OracleDetects tasks escaping their Region scopeA test oracle that catches tasks which outlive their owning Region — the concurrent equivalent of a dangling pointer. If a task is still running after its Region has closed, the TaskLeak oracle flags it immediately, preventing 'zombie' tasks from silently consuming resources., ObligationLeakObligationLeak OracleCatches leaked Permits and LeasesA test oracle that monitors for Permits or Leases that are dropped without being explicitly consumed (sent, completed, or aborted). When it catches one, it reports exactly which obligation was leaked and where it was created., and CancelProtocolCancelProtocol OracleVerifies tasks obey the 3-phase cancel contractA specialized test oracle that monitors every task during Lab testing to ensure it correctly follows the three-phase cancel protocol. If a task tries to skip the Drain phase or ignores a cancel request, the oracle catches it immediately and reports the exact violation. oracles) verifies that the cancellation was handled correctly.
Combined with DPORDPORDynamic Partial Order ReductionA smart algorithm used by the Lab Runtime. Instead of testing every single possible way tasks could overlap (which would take billions of years), DPOR only tests the interleavings that actually interact with each other. It finds the needle in the concurrency haystack mathematically. search pruning and seedSeedDeterministic execution parameterAn integer that controls the Lab runtime's scheduling decisions. The exact same seed always produces the exact same task interleaving, enabling perfectly reproducible testing and bug replay.-deterministic scheduling, this approach provides a mathematical guarantee: if no oracle fires across all injection points and all explored interleavings, your code is cancel-correctCancel-CorrectGuaranteed safe shutdownIn standard Rust, dropping a future might instantly stop it mid-execution, leaving connections open or data half-written. 'Cancel-Correct' means the runtime enforces a graceful 3-phase shutdown: tasks are asked to stop, given time to clean up, and finally swept away safely.. This is not a confidence interval or a probabilistic claim. It is an exhaustive proof over the tested input space.
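The N+1 replay discipline can be sketched with a toy task. `run_task` and `sweep` are hypothetical names; the real Lab intercepts `checkpoint()` calls rather than taking an injection index as a parameter.

```rust
// Sketch: replay a task once cleanly, then once per checkpoint with a
// cancellation injected at that point — N + 1 executions for N checkpoints.
#[derive(Debug, PartialEq)]
enum Outcome { Completed, CancelledCleanly }

/// A toy task with three checkpoints; `cancel_at` selects the injection
/// point. Returns the outcome plus whether cleanup (drain/finalize) ran.
fn run_task(cancel_at: Option<usize>) -> (Outcome, bool) {
    let checkpoints = 3;
    for point in 0..checkpoints {
        if cancel_at == Some(point) {
            // Cancellation observed at this await point: drain, finalize.
            return (Outcome::CancelledCleanly, true);
        }
        // ... forward work between checkpoints would happen here ...
    }
    (Outcome::Completed, true)
}

/// Exhaustive sweep: one clean run, then one injection per checkpoint.
fn sweep(checkpoints: usize) -> Vec<(Outcome, bool)> {
    let mut results = vec![run_task(None)];
    for point in 0..checkpoints {
        results.push(run_task(Some(point)));
    }
    results
}
```

In the real Lab, the oracles (not a boolean) judge each of the N+1 runs, but the enumeration structure is exactly this.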
Anytime-valid statistical monitors that continuously evaluate runtime invariants and instantly halt tests upon violation.
E-processE-ProcessAnytime-valid statistical monitor using betting martingalesA betting martingale that lets you continuously monitor a hypothesis and reject it the instant the evidence is strong enough — without any penalty for peeking. Unlike traditional p-values (which require a fixed sample size), an E-process stays valid no matter when you check it. Asupersync's Lab runtime uses E-processes to monitor three oracle invariants (task leak, obligation leak, quiescence) with guaranteed Type-I error control via Ville's inequality. test oraclesTest OracleRuntime correctness monitor for Lab testingA specialized runtime monitor that checks a specific class of correctness invariant during Lab testing. Each of the 17 built-in oracles watches for a different kind of bug — from resource leaks to protocol violations to budget overruns — acting as an automated auditor for your concurrent code. are statistical monitors that run continuously alongside your tests, checking runtime invariants without requiring a fixed sample size. Traditional hypothesis tests commit to N observations upfront; peeking at intermediate results invalidates the statistical guarantee. E-processes use a betting supermartingaleSupermartingaleStochastic process proving terminationA mathematical sequence of values that, on average, decreases over time. Asupersync uses supermartingales as termination proofs: by showing that a certain measure of 'remaining work' is a supermartingale, it mathematically proves that tasks always make progress and eventually complete. that remains valid at every observation, allowing the oracle to halt the test the instant a violation is detected.
The visualization shows multiple oracles monitoring a running test. Each oracle tracks a specific invariant: the TaskLeak oracleTaskLeak OracleDetects tasks escaping their Region scopeA test oracle that catches tasks which outlive their owning Region — the concurrent equivalent of a dangling pointer. If a task is still running after its Region has closed, the TaskLeak oracle flags it immediately, preventing 'zombie' tasks from silently consuming resources. watches for tasks escaping their regionRegionA structured concurrency scopeA lifetime-bounded scope that owns tasks. When a Region closes, all tasks within it are cancelled (via the cancel protocol), awaited to completion, and their resources cleaned up. Regions nest hierarchically, forming a tree. scope, the ObligationLeak oracleObligationLeak OracleCatches leaked Permits and LeasesA test oracle that monitors for Permits or Leases that are dropped without being explicitly consumed (sent, completed, or aborted). When it catches one, it reports exactly which obligation was leaked and where it was created. catches dropped permitsPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create., and the CancelProtocol oracleCancelProtocol OracleVerifies tasks obey the 3-phase cancel contractA specialized test oracle that monitors every task during Lab testing to ensure it correctly follows the three-phase cancel protocol. If a task tries to skip the Drain phase or ignores a cancel request, the oracle catches it immediately and reports the exact violation. verifies that every cancellation follows the three-phase contract.
The anytime-valid property is critical for LabLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. testing, where the number of scheduling interleavings is not known in advance. The LabLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. explores interleavings until either an oracle fires or DPORDPORDynamic Partial Order ReductionA smart algorithm used by the Lab Runtime. Instead of testing every single possible way tasks could overlap (which would take billions of years), DPOR only tests the interleavings that actually interact with each other. It finds the needle in the concurrency haystack mathematically. reports that all distinct interleavings have been covered. Because E-processes maintain their statistical validity at every stopping time, there is no risk of false positives from early termination.
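A minimal sketch of such a monitor, assuming a simple Bernoulli betting scheme: the null hypothesis is that the per-observation violation rate is at most `P0`, and the constants `P0`, `LAMBDA`, and `ALPHA` are illustrative rather than Asupersync's actual tuning.

```rust
// Sketch of an anytime-valid E-process monitor. Under the null the wealth
// is a nonnegative supermartingale, so by Ville's inequality
// P(wealth ever reaches 1/α) ≤ α, no matter how often we peek.
const P0: f64 = 0.01;     // tolerated violation rate under the null
const LAMBDA: f64 = 50.0; // bet size; validity requires LAMBDA <= 1/P0
const ALPHA: f64 = 0.05;  // Type-I error budget

struct EProcess {
    wealth: f64,
}

impl EProcess {
    fn new() -> Self {
        EProcess { wealth: 1.0 }
    }

    /// Feed one observation (true = invariant violated). Returns true when
    /// the evidence is strong enough to halt the test immediately.
    fn observe(&mut self, violated: bool) -> bool {
        let x = if violated { 1.0 } else { 0.0 };
        // Multiplicative bet: grows on violations, shrinks slightly otherwise.
        self.wealth *= 1.0 + LAMBDA * (x - P0);
        self.wealth >= 1.0 / ALPHA
    }
}
```

The key property is in the return value: checking it after every single observation costs nothing statistically, which is exactly what a fixed-N hypothesis test cannot offer.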
Canonicalizing execution traces to prune DPORDPORDynamic Partial Order ReductionA smart algorithm used by the Lab Runtime. Instead of testing every single possible way tasks could overlap (which would take billions of years), DPOR only tests the interleavings that actually interact with each other. It finds the needle in the concurrency haystack mathematically. search spaces without missing any bugs.
A Foata fingerprintFoata FingerprintCanonical hash of concurrent execution tracesA technique for canonicalizing concurrent execution traces using Foata normal form. Two interleavings that differ only in the order of independent (non-conflicting) operations produce the same fingerprint. DPOR uses this to detect when two different schedules are actually equivalent — pruning redundant exploration without missing real bugs. is a canonical hash of a concurrent execution trace. Two traces that differ only in the ordering of independent (non-conflicting) operations produce the same fingerprint. This is because the fingerprint is computed over the trace's Mazurkiewicz traceMazurkiewicz TraceEquivalence class of concurrent interleavingsA concept from concurrency theory: two execution traces are 'Mazurkiewicz equivalent' if they differ only in the order of independent (non-conflicting) operations. Asupersync's DPOR uses Mazurkiewicz traces to avoid redundantly testing schedules that produce identical outcomes — a massive reduction in the search space for concurrency testing. equivalence class: the set of all orderings that differ only in how independent events are interleaved.
The visualization shows multiple execution traces being reduced to their Foata normal form. In this normal form, independent events are grouped into “layers” where all events within a layer can execute in any order. The fingerprint is then a hash of these layers, ignoring intra-layer ordering.
This technique is essential for DPORDPORDynamic Partial Order ReductionA smart algorithm used by the Lab Runtime. Instead of testing every single possible way tasks could overlap (which would take billions of years), DPOR only tests the interleavings that actually interact with each other. It finds the needle in the concurrency haystack mathematically. efficiency. Without fingerprinting, the search algorithm would explore many traces that are equivalent (same conflicts, same outcomes, different ordering of independent steps). By comparing Foata fingerprintsFoata FingerprintCanonical hash of concurrent execution tracesA technique for canonicalizing concurrent execution traces using Foata normal form. Two interleavings that differ only in the order of independent (non-conflicting) operations produce the same fingerprint. DPOR uses this to detect when two different schedules are actually equivalent — pruning redundant exploration without missing real bugs., the Lab runtimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. detects and skips equivalent traces, reducing the search space by orders of magnitude while guaranteeing that no distinct behavior is missed.
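A sketch of the fingerprint computation, assuming the simplest possible conflict relation: two events conflict exactly when they touch the same resource. The greedy layering below is the standard Foata construction; the hash function is the standard-library default, standing in for whatever the runtime actually uses.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch: each event is (task id, resource). The Foata normal form packs
// each event into the earliest layer after its last conflicting
// predecessor; the fingerprint hashes layers with intra-layer order
// normalized away, so equivalent interleavings hash identically.
fn foata_fingerprint(trace: &[(u32, char)]) -> u64 {
    let mut layers: Vec<Vec<(u32, char)>> = Vec::new();
    for &ev in trace {
        // Highest layer containing an event that conflicts with `ev`.
        let floor = layers
            .iter()
            .rposition(|layer| layer.iter().any(|&(_, r)| r == ev.1))
            .map_or(0, |i| i + 1);
        if floor == layers.len() {
            layers.push(Vec::new());
        }
        layers[floor].push(ev);
    }
    let mut h = DefaultHasher::new();
    for layer in &mut layers {
        layer.sort(); // canonical intra-layer order
        layer.hash(&mut h);
    }
    h.finish()
}
```

Swapping two independent events leaves the layers, and hence the fingerprint, unchanged; swapping two conflicting events changes the layering and produces a different fingerprint.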
Drawing distribution-free mathematical boundaries to flag anomalous tasks with absolute certainty.
Conformal calibrationConformal CalibrationDistribution-free prediction intervals for Lab metricsA statistical technique that produces valid prediction intervals without assuming any particular data distribution. The Lab runtime uses conformal calibration to set alert thresholds on runtime metrics — if a test run's timing falls outside the conformal band, the oracle flags it as anomalous, even if the underlying distribution is completely unknown. constructs prediction intervals around expected task behavior without assuming any particular probability distribution. Given a calibration set of observed task durations (or resource usage, or any scalar metric), conformal prediction outputs an interval such that a new observation falls inside the interval with a user-specified probability, say 95%. If a task's metric falls outside this interval, it is flagged as anomalous.
The visualization plots task metrics with the conformal prediction band overlaid. Points inside the band are normal; points outside are flagged. The width of the band adapts to the observed data, widening when variance is high and narrowing when behavior is consistent.
The distribution-free guarantee is what distinguishes conformal calibration from parametric approaches. You do not need to assume Gaussian, exponential, or any other distributional form. The coverage guarantee holds for any data distribution, making it robust against the heavy-tailed latency distributions common in async systems. The Lab runtimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. uses conformal calibration to set adaptive thresholds for test oraclesTest OracleRuntime correctness monitor for Lab testingA specialized runtime monitor that checks a specific class of correctness invariant during Lab testing. Each of the 17 built-in oracles watches for a different kind of bug — from resource leaks to protocol violations to budget overruns — acting as an automated auditor for your concurrent code., replacing hand-tuned timeout constants with statistically grounded bounds.
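The split-conformal bound itself is little more than a quantile over sorted calibration scores. This sketch uses a one-sided interval suited to latency metrics; the function names are illustrative.

```rust
// Sketch of one-sided split conformal calibration over task durations (ms).
// With n calibration points, the ⌈(n+1)(1−α)⌉-th smallest value is a valid
// upper prediction bound at level 1−α, for ANY duration distribution.
fn conformal_upper_bound(calibration: &[f64], alpha: f64) -> f64 {
    let mut sorted = calibration.to_vec();
    sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = sorted.len();
    let rank = (((n as f64 + 1.0) * (1.0 - alpha)).ceil() as usize).min(n);
    sorted[rank - 1]
}

/// Flag a new observation that falls outside the conformal band.
fn is_anomalous(duration: f64, calibration: &[f64], alpha: f64) -> bool {
    duration > conformal_upper_bound(calibration, alpha)
}
```

Note that no distributional assumption appears anywhere in the code: the guarantee comes purely from exchangeability of the calibration set and the new observation.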
A strictly decreasing budgetBudgetCountdown timer for task cleanupsWhen cancellation strikes, tasks aren't killed instantly—they're given a 'Budget' (a specific amount of time, like 5 seconds) to clean up their mess during the Drain phase. If they run out of time, they are forcefully Finalized. that mathematically guarantees a cancellation cascade will not spin infinitely.
Cancel fuelCancel FuelMonotonically decreasing termination counterA counter that strictly decreases with each step of cancel propagation, serving as a termination proof. Like a rocket with a finite fuel tank — the cancel signal can only travel so far before the fuel runs out, mathematically guaranteeing that cancellation always finishes. is a non-negative integer assigned to each cancel propagation event. Every step of the cancellation (entering a child regionRegionA structured concurrency scopeA lifetime-bounded scope that owns tasks. When a Region closes, all tasks within it are cancelled (via the cancel protocol), awaited to completion, and their resources cleaned up. Regions nest hierarchically, forming a tree., notifying a task, running a finalizer) costs one unit of fuel. When fuel reaches zero, propagation stops. This simple mechanism provides a termination proof: the cancel cascade is guaranteed to complete in at most N steps, where N is the initial fuel value.
The visualization shows fuel depleting as cancellation propagates through nested regionsRegionA structured concurrency scopeA lifetime-bounded scope that owns tasks. When a Region closes, all tasks within it are cancelled (via the cancel protocol), awaited to completion, and their resources cleaned up. Regions nest hierarchically, forming a tree.. Each region consumes fuel proportional to the number of tasks it contains. The initial fuel value is derived from the budget algebraBudget AlgebraProduct semiring for composing cancel budgetsCancel budgets compose algebraically as a product semiring. When regions nest, their budgets merge element-wise: min(deadlines), min(quotas), max(priority). This means the tightest deadline and smallest quota always win, while the highest priority propagates upward — guaranteeing consistent cancel timing across arbitrarily deep region hierarchies., which composes parent and child budgetsBudgetCountdown timer for task cleanupsWhen cancellation strikes, tasks aren't killed instantly—they're given a 'Budget' (a specific amount of time, like 5 seconds) to clean up their mess during the Drain phase. If they run out of time, they are forcefully Finalized. via the product semiringProduct SemiringAlgebraic structure for budget compositionA mathematical structure where budgets compose element-wise using min and max operations. It's called a 'semiring' because it has two combining operations (like addition and multiplication, but using min and max instead). This gives budget composition clean algebraic properties: it's associative, commutative, and has identity elements..
The formal proof that cancel fuel guarantees termination is a supermartingaleSupermartingaleStochastic process proving terminationA mathematical sequence of values that, on average, decreases over time. Asupersync uses supermartingales as termination proofs: by showing that a certain measure of 'remaining work' is a supermartingale, it mathematically proves that tasks always make progress and eventually complete. argument: the fuel value is a non-negative quantity that strictly decreases with each step. Because a non-negative integer sequence that strictly decreases must reach zero in finite steps, the cascade terminates. This proof is mechanically verified in Lean 4 as part of Asupersync's small-step semanticsSmall-Step SemanticsFormal rules defining single computation stepsA mathematical framework where each computation step is defined by a transition rule of the form ⟨e, σ⟩ → ⟨e′, σ′⟩ (expression e in state σ steps to expression e′ in state σ′). Asupersync has 35 such rules covering spawning, cancellation, effect reservation, region closure, and more — each mechanized in Lean 4 for machine-checked correctness..
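The termination argument can be made concrete with a toy cascade. `RegionNode` is a hypothetical stand-in for the runtime's region tree; the cost model (one unit per region entry, one per task notification) follows the text.

```rust
// Sketch: a cancel cascade over a region tree where every propagation step
// burns one unit of fuel. Because fuel is a non-negative integer that
// strictly decreases, the cascade terminates in at most `fuel` steps.
struct RegionNode {
    tasks: u32,
    children: Vec<RegionNode>,
}

/// Returns remaining fuel; stops early (without descending further) at zero.
fn cancel(region: &RegionNode, mut fuel: u32) -> u32 {
    if fuel == 0 {
        return 0;
    }
    fuel -= 1; // entering the region costs one unit
    for _ in 0..region.tasks {
        if fuel == 0 {
            return 0;
        }
        fuel -= 1; // notifying each task costs one unit
    }
    for child in &region.children {
        fuel = cancel(child, fuel);
    }
    fuel
}
```

A full cascade over the example tree below costs 9 units; with only 5 units, propagation provably halts rather than spinning.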
Resolving concurrent races mathematically. Simultaneous crashes are sorted deterministically, guaranteeing perfect replay.
Trace replay stability ensures that a recorded execution trace can be replayed identically, even when the original execution contained races. The key challenge is tie-breaking: when two events occur “simultaneously” (within the same scheduler tick), the replay engine must resolve them in the same order every time. Asupersync uses a deterministic tie-breaking rule based on task IDs and regionRegionA structured concurrency scopeA lifetime-bounded scope that owns tasks. When a Region closes, all tasks within it are cancelled (via the cancel protocol), awaited to completion, and their resources cleaned up. Regions nest hierarchically, forming a tree. depth, producing a total order over concurrent events.
The visualization shows concurrent events arriving at the same tick and being sorted into a deterministic replay order. Regardless of the order in which the operating system delivers these events, the replay engine produces the same sequence. This determinism is seedSeedDeterministic execution parameterAn integer that controls the Lab runtime's scheduling decisions. The exact same seed always produces the exact same task interleaving, enabling perfectly reproducible testing and bug replay.-dependent: the same seedSeedDeterministic execution parameterAn integer that controls the Lab runtime's scheduling decisions. The exact same seed always produces the exact same task interleaving, enabling perfectly reproducible testing and bug replay. always produces the same tie-breaking decisions.
Stable replay is the foundation of Asupersync's debugging workflow. When a LabLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. test discovers a bug under a specific seedSeedDeterministic execution parameterAn integer that controls the Lab runtime's scheduling decisions. The exact same seed always produces the exact same task interleaving, enabling perfectly reproducible testing and bug replay., you can replay that exact execution as many times as needed to diagnose the root cause. Without stable tie-breaking, replays would diverge from the original execution at the first race, making the recorded trace useless for debugging.
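The tie-breaking rule amounts to sorting same-tick events by a fixed key. The field names here are illustrative; the point is that the composite key (tick, region depth, task id) yields a total order, so any delivery order collapses to the same replay sequence.

```rust
// Sketch of deterministic tie-breaking: events on the same scheduler tick
// are totally ordered by (region depth, task id), so replay produces the
// same sequence regardless of OS delivery order.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Event {
    tick: u64,
    region_depth: u32,
    task_id: u64,
}

fn replay_order(mut events: Vec<Event>) -> Vec<Event> {
    events.sort_by_key(|e| (e.tick, e.region_depth, e.task_id));
    events
}
```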
Automatic LIFO compensation of forward actions when long-running workflows fail or are cancelled.
SagasSagaDistributed operation with automatic compensation on failureA pattern for long-running operations that span multiple steps. Each step has a forward action and a compensating action. If any step fails (or cancellation arrives), the Saga automatically runs compensating actions for all completed steps in LIFO order. Asupersync's Saga engine classifies its 16 operation kinds into monotone and non-monotone using CALM analysis, minimizing the coordination overhead. model multi-step distributed workflows where each step has a forward action and a compensating (undo) action. If any step fails or if cancellation arrives mid-workflow, the runtime automatically compensates all completed steps in reverse (LIFO) order. This ensures that the system returns to a consistent state without manual intervention.
The visualization shows a saga executing forward steps (create account, send email, charge card). Trigger a failure or cancellation at any point to watch the compensation cascade: the last completed step compensates first, then the second-to-last, and so on. Each compensation runs through the cancel protocolThree-Phase Cancel ProtocolRequest → Drain → FinalizeA rigorous lifecycle for stopping tasks. Phase 1: Request (signal intent to stop). Phase 2: Drain (allow tasks to finish writing to disk, closing network sockets, etc.). Phase 3: Finalize (force-release all remaining resources). This prevents the 'silent drop' problem., so compensating actions themselves are subject to the same cleanup guarantees.
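The LIFO compensation cascade can be reduced to a stack of undo actions. This is a minimal sketch using the step names from the visualization; the `Saga` type and its methods are hypothetical, not Asupersync's actual saga API.

```rust
/// Minimal saga sketch: each completed forward step pushes a
/// compensating (undo) action onto a stack.
pub struct Saga {
    compensations: Vec<Box<dyn FnOnce(&mut Vec<String>)>>,
    pub log: Vec<String>,
}

impl Saga {
    pub fn new() -> Self {
        Saga { compensations: Vec::new(), log: Vec::new() }
    }

    /// Run a forward step; on success, register its undo action.
    /// On failure, compensate everything completed so far.
    pub fn step(&mut self, name: &str, ok: bool) -> bool {
        if !ok {
            self.compensate();
            return false;
        }
        self.log.push(format!("do {name}"));
        let undo = format!("undo {name}");
        self.compensations.push(Box::new(move |log| log.push(undo)));
        true
    }

    /// Pop compensators in LIFO order: the last completed step undoes first.
    fn compensate(&mut self) {
        while let Some(c) = self.compensations.pop() {
            c(&mut self.log);
        }
    }
}
```

A failed `charge_card` step after `create_account` and `send_email` thus yields the log `do create_account, do send_email, undo send_email, undo create_account`: compensation runs in exactly the reverse of completion order.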
Asupersync's saga engine applies CALM analysisCALM AnalysisIdentifies which operations need coordinationConsistency As Logical Monotonicity — a principle that determines which operations can run coordination-free (monotone operations like Reserve and Send) and which require barriers (non-monotone operations like Commit and Release). Asupersync's Saga engine uses CALM to automatically batch consecutive monotone steps, inserting coordination barriers only where the math demands it. to optimize execution. Consecutive monotoneMonotone OperationOrder-independent operation that needs no coordinationAn operation whose result doesn't depend on the order it's executed relative to other monotone operations. Reserve, Send, Acquire, Renew, and CrdtMerge are all monotone in Asupersync's Saga engine. Because they're order-independent, they can be batched and executed without coordination barriers — a key performance optimization identified by CALM analysis. forward steps (those that can be reordered without affecting the outcome) execute concurrently without coordination barriersCoordination BarrierSync point required before non-monotone operationsA synchronization point that the Saga engine inserts before non-monotone operations (like Commit or Release). Monotone operations (like Reserve and Send) can execute without barriers because they're order-independent. CALM analysis determines exactly where barriers are needed, minimizing unnecessary synchronization.. Non-monotoneNon-Monotone OperationBarrier-requiring actionAn operation that checks a threshold or relies on negation (e.g., 'close this region only if zero tasks remain'). Because it must ensure no other thread is secretly adding a task while it checks, it fundamentally requires a heavy synchronization lock to execute safely. steps trigger a barrier. 
This classification-driven optimization means the saga engine extracts maximum concurrency while maintaining correctness, without requiring the developer to annotate parallelism manually.
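The CALM-driven batching can be sketched as a simple fold over the step list: consecutive monotone operations are grouped into one coordination-free batch, and a barrier is emitted for each non-monotone step. The operation names follow the text; the `plan` function and `Plan` type are illustrative, not Asupersync's engine.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum Op { Reserve, Send, Acquire, Commit, Release }

/// Monotone ops are order-independent and need no coordination.
fn is_monotone(op: Op) -> bool {
    matches!(op, Op::Reserve | Op::Send | Op::Acquire)
}

#[derive(PartialEq, Debug)]
pub enum Plan {
    Batch(Vec<Op>),  // runs concurrently, no barriers
    Barrier(Op),     // synchronization point before a non-monotone op
}

/// Fold a step list into monotone batches plus explicit barriers.
pub fn plan(steps: &[Op]) -> Vec<Plan> {
    let mut out = Vec::new();
    let mut batch = Vec::new();
    for &op in steps {
        if is_monotone(op) {
            batch.push(op);
        } else {
            if !batch.is_empty() {
                out.push(Plan::Batch(std::mem::take(&mut batch)));
            }
            out.push(Plan::Barrier(op));
        }
    }
    if !batch.is_empty() {
        out.push(Plan::Batch(batch));
    }
    out
}
```

For the sequence Reserve, Send, Commit, Acquire, Release this yields one batch of two monotone steps, a barrier at Commit, a singleton batch, and a barrier at Release: barriers appear only where order actually matters.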
Machine-checked execution rules in Lean 4 that mathematically prove the soundness of the runtime.
Small-step semanticsSmall-Step SemanticsFormal rules defining single computation stepsA mathematical framework where each computation step is defined by a transition rule of the form ⟨e, σ⟩ → ⟨e′, σ′⟩ (expression e in state σ steps to expression e′ in state σ′). Asupersync has 35 such rules covering spawning, cancellation, effect reservation, region closure, and more — each mechanized in Lean 4 for machine-checked correctness. define the runtime's behavior as a set of transition rulesTransition RuleOne formal rule in the small-step semanticsA single rule in the small-step operational semantics that defines how one kind of computation step works. For example, the SPAWN rule defines what happens when a task is created, and the CANCEL-PROPAGATE rule defines how cancellation flows through the Region tree. Asupersync has 35 such rules, all mechanized in Lean 4., each specifying how one computation step transforms the system state. The rule SPAWN, for example, says: given an expression spawn(e) and state σ, one step produces a permitPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create. and registers the new task as Running in σ. By chaining rules, you can trace any execution from start to finish.
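The SPAWN rule can be mimicked in a toy interpreter. This sketch drastically simplifies the state σ to a list of running expressions and implements only the one rule; the real semantics has 35 rules and a far richer state, so treat every name here as illustrative.

```rust
#[derive(Clone, PartialEq, Debug)]
pub enum Expr {
    Spawn(Box<Expr>), // spawn(e)
    Permit(usize),    // the must-use token the spawner receives
    Value,            // an already-evaluated expression
}

/// Toy σ: just the list of tasks registered as Running.
#[derive(Default, PartialEq, Debug)]
pub struct State {
    pub running: Vec<Expr>,
}

/// SPAWN: ⟨spawn(e), σ⟩ → ⟨permit(id), σ with e registered as Running⟩
pub fn step(e: Expr, mut sigma: State) -> (Expr, State) {
    match e {
        Expr::Spawn(body) => {
            let id = sigma.running.len();
            sigma.running.push(*body); // register the new task as Running
            (Expr::Permit(id), sigma)  // the spawner gets a permit back
        }
        other => (other, sigma), // no other rules in this sketch
    }
}
```

Chaining `step` calls traces an execution exactly as the rule chaining described above, one transition at a time.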
The visualization lets you step through these transitions one at a time. Each step highlights the active transition ruleTransition RuleOne formal rule in the small-step semanticsA single rule in the small-step operational semantics that defines how one kind of computation step works. For example, the SPAWN rule defines what happens when a task is created, and the CANCEL-PROPAGATE rule defines how cancellation flows through the Region tree. Asupersync has 35 such rules, all mechanized in Lean 4. and shows how the system state changes. Rules like CANCEL-PROPAGATE, DRAIN-TICK, and EFFECT-RESERVE cover the cancel protocol, fuelCancel FuelMonotonically decreasing termination counterA counter that strictly decreases with each step of cancel propagation, serving as a termination proof. Like a rocket with a finite fuel tank — the cancel signal can only travel so far before the fuel runs out, mathematically guaranteeing that cancellation always finishes. consumption, and two-phase effectsTwo-Phase EffectReserve/commit pattern preventing partial side-effectsA pattern where side effects are split into two stages: Reserve (stage the effect, making it reversible) and Commit (apply it permanently). If cancellation arrives between Reserve and Commit, the reservation is automatically rolled back. Think of it like a bank placing a hold on your account before transferring — if the transfer is cancelled, the hold is simply released. respectively.
These rules are not informal pseudocode. They are mechanically verified in Lean 4, meaning a theorem prover has checked that the rules are consistent, that safety invariants hold across all possible rule applications, and that the system cannot reach an undefined state. This level of rigor is what enables Asupersync's guarantee that cancel-correctnessCancel-CorrectGuaranteed safe shutdownIn standard Rust, dropping a future might instantly stop it mid-execution, leaving connections open or data half-written. 'Cancel-Correct' means the runtime enforces a graceful 3-phase shutdown: tasks are asked to stop, given time to clean up, and finally swept away safely. is a provable property, not a testing aspiration.
See how cancel, timed, and ready lanes prioritize work in the three-lane schedulerThree-Lane SchedulerCancel / Timed / Ready prioritiesA traffic control system for tasks. The 'Cancel Lane' is the ambulance: it always gets priority so cleanups happen instantly. The 'Timed Lane' is for scheduled events, and the 'Ready Lane' handles normal traffic..
The three-lane schedulerThree-Lane SchedulerCancel / Timed / Ready prioritiesA traffic control system for tasks. The 'Cancel Lane' is the ambulance: it always gets priority so cleanups happen instantly. The 'Timed Lane' is for scheduled events, and the 'Ready Lane' handles normal traffic. partitions all runnable tasks into three priority lanes. The Cancel lane has the highest priority and processes tasks that are in the drain phaseDrain PhaseGraceful cleanup during cancellationThe second phase of the cancel protocol. After a cancel request, tasks enter Drain where they can finish in-flight work, flush buffers, close connections, and release resources within their Budget. of the cancel protocolThree-Phase Cancel ProtocolRequest → Drain → FinalizeA rigorous lifecycle for stopping tasks. Phase 1: Request (signal intent to stop). Phase 2: Drain (allow tasks to finish writing to disk, closing network sockets, etc.). Phase 3: Finalize (force-release all remaining resources). This prevents the 'silent drop' problem.. The Timed lane handles tasks with pending timer expirations. The Ready lane processes all other runnable tasks at the lowest priority.
The visualization shows tasks flowing through the three lanes. Notice that cancel-phase tasks always execute before ready tasks, ensuring that the runtime clears cancellation backlogs before taking on new work. This priority ordering is critical: without it, a burst of new task spawns could starve the cancel lane, causing drain budgetsBudgetCountdown timer for task cleanupsWhen cancellation strikes, tasks aren't killed instantly—they're given a 'Budget' (a specific amount of time, like 5 seconds) to clean up their mess during the Drain phase. If they run out of time, they are forcefully Finalized. to expire and forcing hard termination of tasks that could have shut down gracefully.
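The priority ordering itself is a simple rule: pop from the highest-priority non-empty lane. The `Lanes` struct below is a sketch of that rule only; the actual scheduler additionally bounds cancel streaks and does much more bookkeeping.

```rust
use std::collections::VecDeque;

/// Three FIFO lanes in strict priority order: Cancel > Timed > Ready.
#[derive(Default)]
pub struct Lanes {
    pub cancel: VecDeque<&'static str>, // tasks in the drain phase
    pub timed: VecDeque<&'static str>,  // tasks with expired timers
    pub ready: VecDeque<&'static str>,  // everything else runnable
}

impl Lanes {
    /// Pick the next task: the highest-priority non-empty lane wins,
    /// so cancel backlogs always clear before new work is taken on.
    pub fn pop_next(&mut self) -> Option<&'static str> {
        self.cancel
            .pop_front()
            .or_else(|| self.timed.pop_front())
            .or_else(|| self.ready.pop_front())
    }
}
```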
The EXP3 schedulerEXP3 SchedulerAdaptive scheduling via exponential weightsAn online learning algorithm (Exponential-weight algorithm for Exploration and Exploitation) that adapts the scheduler's behavior based on observed cancellation patterns. The scheduler maintains 5 arms [4, 8, 16, 32, 64] representing cancel-streak thresholds. If certain thresholds consistently perform better, EXP3 shifts weight toward them with exploration rate γ=0.07, converging toward the optimal strategy over time. sits on top of this three-lane structure, tuning parameters like the maximum cancel streak length (how many consecutive cancel-lane tasks to process before giving the ready lane a turn). This layered design separates the correctness concern (cancel lane always has priority) from the performance concern (how to balance throughput and cancel responsiveness).
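The EXP3 update can be sketched over the five streak-threshold arms from the text. The arms `[4, 8, 16, 32, 64]` and exploration rate γ = 0.07 come from the description above; the reward signal (how well a threshold performed, normalized to [0, 1]) is an assumption of this sketch, not a documented part of Asupersync.

```rust
/// Standard EXP3 over the five cancel-streak thresholds.
pub struct Exp3 {
    pub arms: [u32; 5],
    weights: [f64; 5],
    gamma: f64,
}

impl Exp3 {
    pub fn new() -> Self {
        Exp3 { arms: [4, 8, 16, 32, 64], weights: [1.0; 5], gamma: 0.07 }
    }

    /// Mixed distribution: mostly weight-proportional, plus a γ share
    /// of uniform exploration across the K = 5 arms.
    pub fn probabilities(&self) -> [f64; 5] {
        let total: f64 = self.weights.iter().sum();
        let k = self.weights.len() as f64;
        let mut p = [0.0; 5];
        for i in 0..5 {
            p[i] = (1.0 - self.gamma) * self.weights[i] / total + self.gamma / k;
        }
        p
    }

    /// Importance-weighted update for the arm that was actually played:
    /// dividing by its probability keeps the reward estimate unbiased.
    pub fn update(&mut self, played: usize, reward: f64) {
        let p = self.probabilities()[played];
        let estimated = reward / p;
        let k = self.weights.len() as f64;
        self.weights[played] *= (self.gamma * estimated / k).exp();
    }
}
```

A well-rewarded arm gains probability mass on the next draw while the distribution keeps summing to one, which is how the scheduler drifts toward the best-performing streak threshold over time.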
Follow a PermitPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create. through its lifecycle: Reserve → Hold → Consume.
The obligation systemObligation SystemFramework ensuring all resources are consumedThe formal framework that tracks every Permit and Lease in the system and guarantees they are all explicitly consumed before their owning scope exits. If a resource escapes without being handled, the compiler or runtime catches it — no resource can silently leak. uses linear typesLinear ObligationsMandatory return policy for resourcesA strict resource management system backed by Rust's type system. Think of it as a library book you must explicitly return. If a code path forgets to consume a resource (like a Permit or Lease), the program refuses to compile, preventing resource leaks at compile-time. to enforce resource lifecycle invariants at compile time. When a task acquires a resource (opens a file, takes a lock, reserves a side-effect), it receives a permitPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create.. That permit must be explicitly consumed (by closing the file, releasing the lock, or committing the effect) before the task can complete. Dropping a permit without consuming it is a compile-time error.
The visualization traces a permitPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create. through its three lifecycle stages. Reserve creates the permit. Hold represents the window during which the task is using the resource. Consume releases the permit, satisfying the linear obligation. If the task is cancelled during the Hold phase, the drain phaseDrain PhaseGraceful cleanup during cancellationThe second phase of the cancel protocol. After a cancel request, tasks enter Drain where they can finish in-flight work, flush buffers, close connections, and release resources within their Budget. gives it time to consume outstanding permits.
LeasesLeaseScoped resource borrowTemporary ownership of a shared resource (like a database connection). Unlike standard Rust where drops are passive, if a Lease's owning Region is cancelled, the resource is actively and safely returned to its pool via the cancel protocol. are a time-bounded variant: a lease-permit automatically expires after a deadline, releasing the resource without explicit consumption. The ObligationLeak oracleObligationLeak OracleCatches leaked Permits and LeasesA test oracle that monitors for Permits or Leases that are dropped without being explicitly consumed (sent, completed, or aborted). When it catches one, it reports exactly which obligation was leaked and where it was created. in the Lab runtimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. catches any permit that reaches task completion without being consumed, turning resource leaks from production incidents into test failures.
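In plain Rust, the permit discipline can be approximated (though not fully replicated) with `#[must_use]` plus a panic-on-drop guard: the attribute warns when a permit is ignored, and the `Drop` impl turns an unconsumed drop into a loud failure, much as the ObligationLeak oracle would report it. This enforcement strategy is our sketch; how Asupersync actually achieves compile-time rejection is not shown here.

```rust
#[must_use = "a Permit must be consumed, not dropped"]
pub struct Permit {
    consumed: bool,
}

/// Reserve: acquiring the resource creates the obligation.
pub fn reserve() -> Permit {
    Permit { consumed: false }
}

impl Permit {
    /// Consume: satisfies the obligation and defuses the drop guard.
    pub fn consume(mut self) -> &'static str {
        self.consumed = true;
        "committed"
    }
}

impl Drop for Permit {
    fn drop(&mut self) {
        // In a Lab run this is the condition the ObligationLeak oracle
        // reports: a permit reached the end of its life unconsumed.
        assert!(self.consumed, "obligation leaked: Permit dropped unconsumed");
    }
}
```

The Hold window is simply the span between `reserve()` and `consume()`; dropping the permit anywhere in that window without consuming it trips the guard.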
Change the seedSeedDeterministic execution parameterAn integer that controls the Lab runtime's scheduling decisions. The exact same seed always produces the exact same task interleaving, enabling perfectly reproducible testing and bug replay. to see different task execution orders. Same code, different interleavings.
The Lab runtimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. is a deterministic, single-threaded scheduler that replaces the production runtime during testing. It accepts a seedSeedDeterministic execution parameterAn integer that controls the Lab runtime's scheduling decisions. The exact same seed always produces the exact same task interleaving, enabling perfectly reproducible testing and bug replay. value and uses it to control every scheduling decision: task selection order, timer resolution, and tie-breaking. The same seed always produces the same execution, making concurrency bugs reproducible.
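Seed-driven task selection can be sketched as a loop that asks a seeded PRNG which runnable task advances next. The `schedule` function is a toy (real Lab decisions also cover timers and tie-breaks), but it shows the core property: same seed, same interleaving.

```rust
/// Minimal LCG so the sketch needs no external crates.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

/// Run `n_tasks` single-step tasks to completion; each scheduling
/// decision is drawn from the seeded PRNG, so the returned execution
/// order is a pure function of the seed.
pub fn schedule(seed: u64, n_tasks: usize) -> Vec<usize> {
    let mut rng = Lcg(seed);
    let mut runnable: Vec<usize> = (0..n_tasks).collect();
    let mut order = Vec::new();
    while !runnable.is_empty() {
        let pick = (rng.next() % runnable.len() as u64) as usize;
        order.push(runnable.remove(pick));
    }
    order
}
```

Different seeds may reorder the tasks arbitrarily, but every seed always schedules each task exactly once, which is what lets oracles check invariants across interleavings.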
The visualization shows a set of tasks executing under different seeds. Change the seed and watch the execution order change. Tasks that ran first under one seed may run last under another. Despite this reordering, the test oraclesTest OracleRuntime correctness monitor for Lab testingA specialized runtime monitor that checks a specific class of correctness invariant during Lab testing. Each of the 17 built-in oracles watches for a different kind of bug — from resource leaks to protocol violations to budget overruns — acting as an automated auditor for your concurrent code. verify that all invariants hold across every interleaving.
The Lab is not a mock or a simulator. It runs your actual application code, including I/O (via capability-controlled stubs), against the real obligation systemObligation SystemFramework ensuring all resources are consumedThe formal framework that tracks every Permit and Lease in the system and guarantees they are all explicitly consumed before their owning scope exits. If a resource escapes without being handled, the compiler or runtime catches it — no resource can silently leak. and cancel protocolThree-Phase Cancel ProtocolRequest → Drain → FinalizeA rigorous lifecycle for stopping tasks. Phase 1: Request (signal intent to stop). Phase 2: Drain (allow tasks to finish writing to disk, closing network sockets, etc.). Phase 3: Finalize (force-release all remaining resources). This prevents the 'silent drop' problem.. The only difference from production is that scheduling is deterministic rather than OS-driven. Combined with systematic DPORDPORDynamic Partial Order ReductionA smart algorithm used by the Lab Runtime. Instead of testing every single possible way tasks could overlap (which would take billions of years), DPOR only tests the interleavings that actually interact with each other. It finds the needle in the concurrency haystack mathematically. exploration, the Lab can cover all distinct interleavings for small test scenarios, providing exhaustive verification rather than probabilistic confidence.
Toggle DPORDPORDynamic Partial Order ReductionA smart algorithm used by the Lab Runtime. Instead of testing every single possible way tasks could overlap (which would take billions of years), DPOR only tests the interleavings that actually interact with each other. It finds the needle in the concurrency haystack mathematically. to see how Asupersync prunes redundant commutations to make testing race conditions practical.
DPORDPORDynamic Partial Order ReductionA smart algorithm used by the Lab Runtime. Instead of testing every single possible way tasks could overlap (which would take billions of years), DPOR only tests the interleavings that actually interact with each other. It finds the needle in the concurrency haystack mathematically. (Dynamic Partial Order Reduction) is an algorithm that explores concurrent interleavings selectively, skipping orderings that are provably equivalent. Two interleavings are equivalent if they differ only in the ordering of independent (non-conflicting) operations. DPOR identifies conflict points at runtime, then backtracks only to explore orderings that produce genuinely different outcomes.
The visualization shows an interleaving search tree. Without DPOR, every possible ordering is explored, leading to factorial blowup. Toggle DPOR on to see branches pruned. The remaining branches are precisely those that contain at least one conflict-dependent reordering, meaning they could exhibit different behavior from previously explored paths.
Foata fingerprintsFoata FingerprintCanonical hash of concurrent execution tracesA technique for canonicalizing concurrent execution traces using Foata normal form. Two interleavings that differ only in the order of independent (non-conflicting) operations produce the same fingerprint. DPOR uses this to detect when two different schedules are actually equivalent — pruning redundant exploration without missing real bugs. further accelerate this pruning. When two unexplored branches produce the same Foata fingerprintFoata FingerprintCanonical hash of concurrent execution tracesA technique for canonicalizing concurrent execution traces using Foata normal form. Two interleavings that differ only in the order of independent (non-conflicting) operations produce the same fingerprint. DPOR uses this to detect when two different schedules are actually equivalent — pruning redundant exploration without missing real bugs., the LabLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. skips the duplicate. The combined effect of DPOR and fingerprint deduplication typically reduces the search space from factorial to polynomial in the number of tasks, making exhaustive interleaving testing practical for real-world async applications.
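The canonicalization behind the fingerprint can be sketched with a Foata-style normal form. Here each operation is a (task, resource) pair; two ops are taken to conflict iff they share a task or a resource, which is an assumption of this sketch. Each op lands in the earliest level after its last conflict, levels are sorted canonically, and the real fingerprint would then hash this form.

```rust
type Op = (&'static str, &'static str); // (task, resource)

/// Build the Foata-style leveled canonical form of a trace.
/// Interleavings that differ only in the order of independent ops
/// produce the same levels, so they canonicalize identically.
pub fn foata(trace: &[Op]) -> Vec<Vec<Op>> {
    let mut levels: Vec<Vec<Op>> = Vec::new();
    for &(task, resource) in trace {
        // Earliest level strictly after the last level holding a conflict.
        let mut floor = 0;
        for (i, level) in levels.iter().enumerate() {
            if level.iter().any(|&(t, r)| t == task || r == resource) {
                floor = i + 1;
            }
        }
        if floor == levels.len() {
            levels.push(Vec::new());
        }
        levels[floor].push((task, resource));
    }
    for level in &mut levels {
        level.sort(); // canonical order within each level
    }
    levels
}
```

Two schedules of independent ops collapse to the same form, while schedules that reorder conflicting ops do not, which is exactly the distinction DPOR needs for safe pruning.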
Side-by-side comparison of cancel behavior. Press 'Cancel Now' to see the difference.
This visualization provides a direct comparison between Tokio's cancellation model and Asupersync's three-phase cancel protocolThree-Phase Cancel ProtocolRequest → Drain → FinalizeA rigorous lifecycle for stopping tasks. Phase 1: Request (signal intent to stop). Phase 2: Drain (allow tasks to finish writing to disk, closing network sockets, etc.). Phase 3: Finalize (force-release all remaining resources). This prevents the 'silent drop' problem.. On the Tokio side, dropping a JoinHandle immediately destroys the future. Any in-flight work, open connections, and acquired locks are abandoned without cleanup. On the Asupersync side, cancellation initiates the structured Request → DrainDrain PhaseGraceful cleanup during cancellationThe second phase of the cancel protocol. After a cancel request, tasks enter Drain where they can finish in-flight work, flush buffers, close connections, and release resources within their Budget. → FinalizeFinalize PhaseFinal cleanup after drainingThe Finalize phase runs after Drain completes (or the Budget expires). It executes registered finalizers in LIFO order — similar to defer in Go but tightly integrated with the cancel protocol. sequence.
Press the Cancel button and observe both sides simultaneously. Tokio's side completes instantly because the future is simply dropped. Asupersync's side takes slightly longer (bounded by the drain budgetBudgetCountdown timer for task cleanupsWhen cancellation strikes, tasks aren't killed instantly—they're given a 'Budget' (a specific amount of time, like 5 seconds) to clean up their mess during the Drain phase. If they run out of time, they are forcefully Finalized.) but produces a clean shutdown: buffers are flushed, connections are closed, permitsPermitMust-use lifecycle tokenWhen you spawn a task, you get a Permit. You can't just throw it away; you must explicitly wait for the task, detach it, or cancel it. It's how Asupersync enforces that you are always paying attention to the tasks you create. are consumed, and finalizers run in LIFO order.
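The two cancellation models can be contrasted on a single task's pending cleanup list. This is a deliberately bare sketch (both function names are ours): dropping the future abandons every pending finalizer, while the three-phase path runs them in LIFO order, like `defer` in Go.

```rust
/// Drop-style cancellation: the future is destroyed, so the
/// cleanup list is simply abandoned and nothing runs.
pub fn cancel_by_drop(finalizers: Vec<&'static str>) -> Vec<&'static str> {
    let _ = finalizers;
    Vec::new()
}

/// Three-phase cancellation: Finalize pops registered finalizers
/// in LIFO order, so the most recently acquired resource is
/// released first.
pub fn cancel_three_phase(mut finalizers: Vec<&'static str>) -> Vec<&'static str> {
    let mut ran = Vec::new();
    while let Some(f) = finalizers.pop() {
        ran.push(f);
    }
    ran
}
```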
The tradeoff is explicit. Tokio's approach is faster in the cancellation path, but it makes cancel-correctnessCancel-CorrectGuaranteed safe shutdownIn standard Rust, dropping a future might instantly stop it mid-execution, leaving connections open or data half-written. 'Cancel-Correct' means the runtime enforces a graceful 3-phase shutdown: tasks are asked to stop, given time to clean up, and finally swept away safely. the developer's problem. Asupersync's approach adds bounded latency to cancellation, but it makes cleanup automatic and verifiable. The Lab runtimeLab RuntimeDeterministic testing sandboxA virtual time-machine for your code. The Lab Runtime replaces the chaotic real-world scheduler with a perfectly controlled one. By feeding it a 'Seed', developers can reproduce insanely complex concurrency bugs (race conditions) every single time. Bye-bye flaky tests. can prove that your application handles cancellation correctly across all interleavings; in Tokio, you must rely on manual code review and hope that every .await site handles its drop correctly.