The Sovereign SDK Book

Welcome to the Sovereign SDK Book, your comprehensive guide to the industry's most flexible toolkit for building high-performance, real-time rollups.

We built this SDK to give developers, from solo builders to large teams, the power to create onchain applications that were previously impossible.

With P99 transaction execution times under 10 milliseconds, the Sovereign SDK is fast enough to bring complex financial systems, like Central-Limit Orderbooks (CLOBs), fully on-chain.

Let's build the next Hyperliquid.

Why Build a Dedicated Rollup For Your Application?

For almost a decade, developers have been forced to build applications on shared, general-purpose blockchains. This model forces apps with vastly different needs to compete for the same limited blockspace. Building your application as a dedicated rollup gives you three strategic advantages:

  1. Dedicated Throughput: Your users will never have to compete with a viral NFT drop. A rollup gives your application its own dedicated lane, ensuring a consistently fast and affordable user experience.
  2. Capturing More Value: On shared blockchains, user fees primarily benefit the chain operators (i.e. L1 validators or general-purpose L2 sequencers). With a rollup, your application and its users can capture the vast majority of that value, creating a sustainable economic engine for your project.
  3. Full Control & Flexibility: Go beyond the limitations of a shared virtual machine. A rollup gives you full control over the execution environment, allowing you to define your own rules for how transactions are processed. With a rollup, you're in the driver's seat.

Why Choose the Sovereign SDK?

The Sovereign SDK is designed around four key principles to provide an unmatched developer and user experience:

  • Total Customization: While rollups promise flexibility, existing frameworks are overly restrictive. Sovereign SDK delivers on that promise with its modular Rust runtime, empowering you to customize as much or as little as needed. Easily add custom fee logic, integrate tailored authenticators, prioritize specific transaction types, or even swap out the authenticated state store—all without wrestling with legacy code.
  • Best-in-Class Performance: With 2-5ms soft confirmations and throughput exceeding 10,000 TPS, the Sovereign SDK is orders of magnitude faster than competing frameworks like Orbit, the OP Stack, or the Cosmos SDK.
  • A Developer-Friendly Experience: Write your logic in standard Rust, run cargo build, and get a complete full-node implementation with REST & WebSocket APIs, an indexer, auto-generated OpenAPI specs, and a sequencer with automatic failover out of the box. No boilerplate or deep blockchain expertise required.
  • Future-Proof Architecture: Never get locked into yesterday's tech stack. With the Sovereign SDK, you can switch data availability layers or zkVMs with just a few lines of code, ensuring your project remains agile for years to come.

How It Works

As a developer, you write your rollup's business logic in Rust, and the SDK handles the complexity of creating a complete, production-ready node implementation.

The magic happens in two stages: real-time execution and on-chain settlement.

  1. Real-Time Execution (Soft Confirmations): Users send transactions to a sequencer. The sequencer executes these transactions instantly (typically within 2-5ms) and returns a "soft confirmation" back to the user. This provides a real-time user experience that feels like a traditional web application.

  2. On-Chain Settlement & Verification: Periodically, the sequencer batches thousands of these transactions and posts them to an underlying Data Availability (DA) layer like Celestia. From this point, the rest of the network—the full nodes—can read the ordered data and execute the transactions to independently verify the new state of the rollup.

Finally, specialized actors called provers (in zk-rollup mode) or attesters (in optimistic-rollup mode) generate cryptographic proofs or attestations that the state was computed correctly. These are posted back to the DA layer, allowing light clients and bridges to securely verify the rollup's state without having to re-execute every transaction.

This two-stage process gives you the best of both worlds: the instant, centralized execution needed for high-performance applications, combined with the censorship-resistance and trust-minimized verification of a traditional blockchain.

Ready to Build?

Now that you understand the power and flexibility of the Sovereign SDK, you're ready to get your hands dirty. In the next chapter, "Getting Started," we'll walk you through cloning a starter repository and running your first rollup in minutes.

Getting Started

Overview

This guide provides a starting point for building rollups with the Sovereign SDK.

It includes everything you need to create a rollup with customizable modules, a REST API for state queries, a TypeScript SDK for submitting transactions, WebSocket endpoints to subscribe to transactions and events, built-in token management, and much more.

Prerequisites

Before you begin, ensure you have the following installed:

  • Rust: 1.88.0 or later
    • Install via rustup: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    • The project will automatically install the correct version via rust-toolchain.toml
  • Node.js: 18.0 or later (for the TypeScript client)
  • Git: For cloning the repository

Running with Mock DA

1. Clone the starter repository and navigate to the rollup directory:

git clone https://github.com/Sovereign-Labs/sov-rollup-starter.git
cd sov-rollup-starter/crates/rollup/

2. (Optional) Clean the database for a fresh start:

make clean-db

3. Start the rollup node:

cargo run --bin node

Explore the REST API endpoints via Swagger UI

The rollup starter includes several built-in modules: Bank (for token management), Paymaster, Hyperlane, and more. You can query any state item in these modules:

open http://localhost:12346/swagger-ui/#/ 

Example: Query the Example Module's state value:

curl -X 'GET' \
  'http://0.0.0.0:12346/modules/example-module/state/value' \
  -H 'accept: application/json'

For now, you should just see null returned for the value state item, as the item hasn't been initialized:

{"data":{"value":null},"meta":{}}

Programmatic Interaction with TypeScript

Set up the TypeScript client:

cd ../../js # Navigate from crates/rollup/ to the js/ client directory
npm install 

The TypeScript script demonstrates the complete transaction flow:

// 1. Initialize rollup client
const rollup = await createStandardRollup({ // defaults to http://localhost:12346, or pass url: "<custom-endpoint>"
  context: {
    defaultTxDetails: {
      max_priority_fee_bips: 0,
      max_fee: "100000000",
      gas_limit: null,
      chain_id: 4321, // Must match chain_id in constants.toml
    },
  },
});

// 2. Initialize signer
const privKey = "0d87c12ea7c12024b3f70a26d735874608f17c8bce2b48e6fe87389310191264";
let signer = new Secp256k1Signer(privKey, chainHash);

// 3. Create a transaction (call message)
let createTokenCall: RuntimeCall = {
  bank: {
    create_token: {
      admins: [],
      token_decimals: 8,
      supply_cap: 100000000000,
      token_name: "Example Token",
      initial_balance: 1000000000,
      mint_to_address: signerAddress, // derived from privKey above (can be any valid address)
    },
  },
};

// 4. Send transaction
let tx_response = await rollup.call(createTokenCall, { signer });

Run the script:

npm run start 

You should see a transaction soft-confirmation with events:

Tx sent successfully. Response:
{
  data: {
    id: '0xbfe14371219807b236c5c719ea85be63174fe0c673e8b229e4913e6f6273a5a0',
    events: [
      {
        type: 'event',
        number: 0,
        key: 'Bank/TokenCreated',
        value: {
          token_created: {
            token_name: 'Example Token',
            coins: {
              amount: '1000000000',
              token_id: 'token_10jrdwqkd0d4zf775np8x3tx29rk7j5m0nz9wj8t7czshylwhnsyqpgqtr9'
            },
            mint_to_address: { user: '0x9b08ce57a93751ae790698a2c9ebc76a78f23e25' },
            minter: { user: '0x9b08ce57a93751ae790698a2c9ebc76a78f23e25' },
            supply_cap: '100000000000',
            admins: []
          }
        },
        module: { type: 'moduleRef', name: 'Bank' },
        tx_hash: '0xbfe14371219807b236c5c719ea85be63174fe0c673e8b229e4913e6f6273a5a0'
      }
    ],
    receipt: { result: 'successful', data: { gas_used: [ 21119, 21119 ] } },
    status: 'submitted'
  },
  meta: {}
}

Subscribe to events from the sequencer:

You can also subscribe to events from the sequencer (you need to uncomment the subscription code blocks in the script):

// Subscribe to events
async function handleNewEvent(event: any): Promise<void> {
  console.log(event);
}
const subscription = rollup.subscribe("events", handleNewEvent);

// Unsubscribe
subscription.unsubscribe();

Interacting with different modules

To interact with different modules, simply change the call message. The top-level key corresponds to the module's variable name in the runtime, and the nested key is the CallMessage enum variant in snake_case:

// Example: Call the ExampleModule's SetValue method
let setValueCall: RuntimeCall = {
  example_module: {  // Must match Runtime field name of the module
    set_value: 10  
  },
};

This transaction would set the ExampleModule's state value to 10. Try setting the example file's call message to the expression above and re-running the script. Then verify that the ExampleModule's value changed using the curl command we showed earlier.

This time, the curl command should return:

{"data":{"value":10},"meta":{}}

What's Next?

You've now successfully launched a rollup, queried its state, and submitted a transaction. You've seen how the bank and example_module are just two components of a larger system.

To truly make this rollup your own, you'll want to build custom logic. In the next chapter, "Writing Your Application," we'll dive deep into the heart of the Sovereign SDK and teach you how to implement your very own module from scratch.

Writing Your Application

At its core, a rollup is a specialized blockchain that processes transactions from a data availability (DA) layer. The logic that determines how your rollup behaves is defined by two key components: the runtime and its modules.

The runtime is the orchestrator of your rollup. It receives serialized transactions from the DA layer, deserializes them, and routes them to the appropriate modules for execution. Think of it as the central nervous system that connects all your application logic together. The runtime defines which modules your rollup supports, how they interact with each other, and how the rollup's state is initialized at genesis.

Modules, on the other hand, contain the actual business logic of your application. Each module manages its own state and defines the operations (called "call messages") that users can perform. For example, you might have a token module for handling transfers, a governance module for voting, or a custom trading module for your specific use case. When a user wants to interact with your rollup, they send a call message targeting a specific module, and the runtime ensures it gets delivered and executed atomically.

The starter package already includes several production-ready modules like Bank (for token management), Sequencer Registry, Accounts, and Hyperlane (for cross-chain messaging). It also provides an Example Module that serves as a template you can modify.

Let's Begin

With this context in mind, we're ready to start building. This chapter will guide you through the complete journey of application development on the Sovereign SDK, from creating your first module to enabling user interactions and exploring advanced features.

Here is the path we'll take:

  1. Implementing a Module: First, we'll define your module's state and business logic—the heart of your application.
  2. Testing Your Module: We'll then write robust tests to ensure your logic is correct and secure.
  3. Integrating Your Module: Next, you'll learn how to add your finished module into a live rollup runtime.
  4. Wallets and Accounts: With your module integrated, we'll explore how users can create accounts and sign transactions to interact with it.
  5. Advanced Topics: From there, we'll dive into powerful features like hooks and custom APIs to extend your module's capabilities.
  6. Performance: You'll learn how to optimize your module for maximum throughput and efficiency.
  7. Prebuilt Modules: Finally, we'll review the rich ecosystem of existing modules you can leverage to accelerate your development.

Let's dive in!

Implementing a Module

A module is the basic unit of functionality in the Sovereign SDK. It's a self-contained piece of onchain logic that manages its own state and defines how users can interact with it.

The best way to learn how modules work is to build one. In this section, we'll create a simple but complete ValueSetter module from scratch. This module will allow a designated admin address to set a u32 value in the rollup's state. After the walkthrough, we'll dive deeper into each of the concepts introduced.

Think of this tutorial as your guide to the fundamental components of a module. Once you understand the concepts, we recommend starting your own module by copying the ExampleModule provided in the starter repository. It has all the necessary dependencies and file structure pre-configured for you.

A Step-by-Step Walkthrough: The ValueSetter Module

1. Defining the Module Struct

First, we define the module's structure and the state it will manage. This struct tells the SDK what data to store onchain.

use sov_modules_api::{Module, ModuleId, ModuleInfo, StateValue, Spec};

// This is the struct that will represent our module.
// It must derive `ModuleInfo` to be a valid module.
#[derive(Clone, ModuleInfo)]
pub struct ValueSetter<S: Spec> {
    /// The `#[id]` attribute is required and uniquely identifies the module instance.
    #[id]
    pub id: ModuleId,

    /// The `#[state]` attribute marks a field as a state variable.
    /// `StateValue` stores a single, typed value.
    #[state]
    pub value: StateValue<u32>,

    /// We'll also store the address of the admin who is allowed to change the value.
    /// `S::Address` is the address type of our rollup. More on `Spec` later.
    #[state]
    pub admin: StateValue<S::Address>,
}

2. Defining Types for the Module Trait

Next, we define the associated types required by the Module trait: its configuration, its callable methods, and its events.

// Continuing in the same file...
use schemars::JsonSchema;
use sov_modules_api::macros::{serialize, UniversalWallet};

// The configuration for our module at genesis. This will be deserialized from `genesis.json`.
#[derive(Clone, Debug, PartialEq, Eq)]
#[serialize(Borsh, Serde)]
#[serde(rename_all = "snake_case")]
pub struct ValueSetterConfig<S: Spec> {
    pub initial_value: u32,
    pub admin: S::Address,
}

// The actions a user can take. Our module only supports one action: setting the value.
#[derive(Clone, Debug, PartialEq, Eq, JsonSchema, UniversalWallet)]
#[serialize(Borsh, Serde)]
#[serde(rename_all = "snake_case")]
pub enum CallMessage {
    SetValue(u32),
}

// The event our module will emit after a successful action.
#[derive(Clone, Debug, PartialEq, Eq)]
#[serialize(Borsh, Serde)]
#[serde(rename_all = "snake_case")]
pub enum Event {
    ValueChanged(u32),
}

3. Implementing the Module Trait Logic

With our types defined, we can now implement the Module trait itself.

use anyhow::Result;
use sov_modules_api::{Context, GenesisState, TxState, EventEmitter};

// Now, we implement the `Module` trait.
impl<S: Spec> Module for ValueSetter<S> {
    type Spec = S;
    type Config = ValueSetterConfig<S>;
    type CallMessage = CallMessage;
    type Event = Event;

    // `genesis` is called once when the rollup is deployed to initialize the state.
    fn genesis(&mut self, _header: &<S::Da as sov_modules_api::DaSpec>::BlockHeader, config: &Self::Config, state: &mut impl GenesisState<S>) -> Result<()> {
        self.value.set(&config.initial_value, state)?;
        self.admin.set(&config.admin, state)?;
        Ok(())
    }

    // `call` is called when a user submits a transaction to the module.
    fn call(&mut self, msg: Self::CallMessage, context: &Context<S>, state: &mut impl TxState<S>) -> Result<()> {
        match msg {
            CallMessage::SetValue(new_value) => {
                self.set_value(new_value, context, state)?;

                Ok(())
            }
        }
    }
}

4. Writing the Business Logic

The final piece is to write the private set_value method containing our business logic.


impl<S: Spec> ValueSetter<S> {
    fn set_value(&mut self, new_value: u32, context: &Context<S>, state: &mut impl TxState<S>) -> Result<()> {
        let admin = self.admin.get_or_err(state)??;

        if admin != *context.sender() {
            return Err(anyhow::anyhow!("Only the admin can set the value.").into());
        }

        self.value.set(&new_value, state)?;
        self.emit_event(state, Event::ValueChanged(new_value));
        Ok(())
    }
}

With that, you've implemented a complete module! Now, let's break down the concepts we used in more detail.


Anatomy of a Module: A Deeper Look

Derived Traits: ModuleInfo and ModuleRestApi

You should always derive ModuleInfo on your module, since it does important work like laying out your state values in the database. If you forget to derive this trait, the SDK will throw a helpful error.

The ModuleRestApi trait is optional but highly recommended. It automatically generates RESTful API endpoints for the #[state] items in your module. Each item's endpoint takes the form {hostname}/modules/{module-name}/{field-name}, with module and field names automatically converted to kebab-case. For example, for the value field in our ValueSetter walkthrough, the SDK would generate an endpoint at the path /modules/value-setter/value.

Note that ModuleRestApi can't always generate endpoints for you. If it can't figure out how to generate an endpoint for a particular state value, it will simply skip it by default. If you want to override this behavior and throw a compiler error if endpoint generation fails, you can add the #[rest_api(include)] attribute.
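
For reference, here is a minimal sketch of how the walkthrough module could opt into the generated endpoints. This is illustrative only: the exact derive path and the placement of the #[rest_api(include)] attribute are assumptions based on the description above, so check your SDK version for the precise names.

use sov_modules_api::{ModuleId, ModuleInfo, ModuleRestApi, Spec, StateValue};

#[derive(Clone, ModuleInfo, ModuleRestApi)]
pub struct ValueSetter<S: Spec> {
    #[id]
    pub id: ModuleId,

    /// Served at /modules/value-setter/value. The `#[rest_api(include)]`
    /// attribute turns a silently skipped endpoint into a compile error.
    #[state]
    #[rest_api(include)]
    pub value: StateValue<u32>,

    #[state]
    pub admin: StateValue<S::Address>,
}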

The Spec Generic

Modules are generic over a Spec type, which provides access to core rollup types. By being generic over the Spec, you ensure that you can easily change things like the Address format used by your module later on without rewriting your logic.

Key types provided by Spec include:

  • S::Address: The address format used on the rollup.
  • S::Da::Address: The address format of the underlying Data Availability layer.
  • S::Da::BlockHeader: The block header type of the DA layer.

#[state], #[module], and #[id] fields

  • #[id]: Every module must have exactly one #[id] field. The ModuleInfo macro uses this to store the module's unique, auto-generated identifier.
  • #[module]: This attribute declares a dependency on another module. For example, if our ValueSetter needed to pay a fee, we could add #[module] pub bank: sov_bank::Bank<S>, allowing us to call self.bank.transfer(...) in our logic (see the sketch after this list).
  • #[state]: This attribute marks a field as a state variable that will be stored in the database.
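
To make the #[module] attribute concrete, here is a short, hedged sketch of the walkthrough module declaring a dependency on Bank. The field declaration mirrors the bullet above; whatever methods you then call on self.bank (such as a transfer) are defined by the Bank module itself.

use sov_modules_api::{ModuleId, ModuleInfo, Spec, StateValue};

#[derive(Clone, ModuleInfo)]
pub struct ValueSetter<S: Spec> {
    #[id]
    pub id: ModuleId,

    /// A dependency on the Bank module, available as `self.bank` in our logic.
    #[module]
    pub bank: sov_bank::Bank<S>,

    /// The state field from the walkthrough remains unchanged.
    #[state]
    pub value: StateValue<u32>,
}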

State Types In-Depth

The SDK provides several state types, each for a different use case:

  • StateValue<T>: Stores a single item of type T. We used this for the value and admin fields above.
  • StateMap<K, V>: Stores a key-value mapping.
  • StateVec<T>: Stores an ordered list of items, accessible by index.

The generic types can be any deterministic Rust data structure, anything from a simple u32 to a complex BTreeMap.

Accessory State: For each state type, there is a corresponding AccessoryState* variant (e.g., AccessoryStateMap). Accessory state is special: it can be read and written via the API, but it is write-only during a transaction. This makes it much cheaper to use for data that doesn't affect onchain logic, like indexing purchase histories for an off-chain frontend.

Codecs: By default, all state is serialized using Borsh. If you need to store a type from a third-party library that only supports serde, you can specify a different codec: StateValue<ThirdPartyType, BcsCodec>.
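
Putting the state types above together, a module's state declarations might look like the following sketch. The module and field names are invented for illustration, and the exact import paths for the state types may differ in your SDK version.

use sov_modules_api::{AccessoryStateMap, ModuleId, ModuleInfo, Spec, StateMap, StateValue, StateVec};

#[derive(Clone, ModuleInfo)]
pub struct MarketplaceExample<S: Spec> {
    #[id]
    pub id: ModuleId,

    /// A single typed value.
    #[state]
    pub listing_fee: StateValue<u64>,

    /// A key-value mapping from seller address to number of active listings.
    #[state]
    pub active_listings: StateMap<S::Address, u64>,

    /// An ordered list of item ids, accessible by index.
    #[state]
    pub featured_items: StateVec<u32>,

    /// Accessory state: write-only during transaction execution, but readable
    /// (and writable) via the API. Ideal for indexing data, like purchase
    /// histories, that never affects onchain logic.
    #[state]
    pub purchase_history: AccessoryStateMap<S::Address, Vec<u64>>,
}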

The Module Trait and its Methods

The Module trait is the core of your application's onchain logic. The implementation you wrote in the walkthrough satisfies this trait's requirements.

Let's look at a simplified version of the trait definition to understand its components:

trait Module {
    /// The configuration needed to initialize the module, deserialized from `genesis.json`.
    type Config;

    /// A module-defined enum representing the actions a user can take.
    type CallMessage: Debug + BorshSerialize + BorshDeserialize + Clone;

    /// A module-defined enum representing the events emitted by successful calls.
    type Event: Debug + BorshSerialize + BorshDeserialize + 'static + core::marker::Send;

    /// `genesis` is called once when the rollup is deployed to initialize state.
    ///
    /// The logic here must be deterministic, but since it only runs once,
    /// efficiency is not a primary concern.
    fn genesis(
        &mut self,
        genesis_rollup_header: &<<Self::Spec as Spec>::Da as DaSpec>::BlockHeader,
        config: &Self::Config,
        state: &mut impl GenesisState<Self::Spec>,
    ) -> Result<(), Error>;


    /// `call` accepts a `CallMessage` and executes it, changing the module's state.
    fn call(
        &mut self,
        message: Self::CallMessage,
        context: &Context<Self::Spec>,
        state: &mut impl TxState<Self::Spec>,
    ) -> Result<CallResponse, Error>;
}

genesis

The genesis function is called once when the rollup is deployed. It uses the module's Config struct (defined as the associated type Config) to initialize the state. This Config is deserialized from the genesis.json file.

call

The call function provides the transaction processing logic. It accepts a structured CallMessage from a user and a Context containing metadata like the sender's address. If your call function returns an error, the SDK automatically reverts all state changes and discards any events, ensuring that transactions are atomic.

You can define the CallMessage to be any type you wish, but an enum is usually best. Be sure to derive borsh and serde serialization, as well as schemars::JsonSchema and UniversalWallet. This ensures your CallMessage is portable across different languages and frontends.

A Note on Gas and Security: Just like Ethereum smart contracts, modules accept inputs that are pre-validated by the chain. Your call method does not need to worry about authenticating the transaction sender. The SDK also automatically meters gas for state accesses. You only need to manually charge gas (using Module::charge_gas(...)) if your module performs heavy computation outside of state reads/writes.

Events

Events are the primary way your module communicates with the outside world. They are structured data included in transaction receipts and are essential for:

  • Querying via REST API.
  • Streaming in real-time via WebSockets.
  • Building off-chain indexers and databases.

Important: Events are only emitted when transactions succeed. If a transaction reverts, all its events are discarded. This makes events perfect for reliably indexing onchain state.

Error Handling

Modules use anyhow::Result for error handling, providing rich context that helps both developers and users understand what went wrong.

When your call method returns an Err, the SDK automatically reverts all state changes made during the transaction. This ensures that your module's logic is atomic.

use anyhow::{Context, Result};

// Simplified code snippet from Bank module
fn transfer(&self, from: &S::Address, amount: u64, state: &mut impl TxState<S>) -> Result<()> {
    let balance = self.balances.get(from, state)
        .with_context(|| format!("Failed to read balance for sender {}", from))?
        .unwrap_or(0);
    
    if balance < amount {
        return Err(anyhow::anyhow!("Insufficient balance: {} < {}", balance, amount));
    }
    // ...
    Ok(())
}

For more details on error handling patterns, see the Advanced Topics section.

Next Step: Ensuring Correctness

You now have a deep understanding of how to define, implement, and structure a module. With this foundation, you're ready to test your module.

In the next section, "Testing Your Module," we'll show you how to use the SDK's powerful testing framework to write comprehensive tests for your new module.

Testing Your Module

Testing is crucial for building reliable modules. The SDK provides a comprehensive testing framework that makes it easy to write thorough tests for your modules.

Test Infrastructure Overview

There are several key components for testing:

  • TestRunner: A stateful test harness that manages your runtime environment
  • TestUser: Test accounts with preconfigured balances
  • TransactionTestCase: A structured way to define test scenarios with assertions
  • Runtime Generation Macros: Automatically include all core modules

Setting Up Your Test Environment

1. Create Your Test Runtime

The generate_optimistic_runtime! macro automatically includes all core modules (Bank, Accounts, SequencerRegistry, etc.), so you only need to add your custom modules:

use sov_test_utils::runtime::optimistic::generate_optimistic_runtime;

// Your module's crate
use your_module::YourModule;

// Generate a runtime with core modules + your custom module
generate_optimistic_runtime!(
    TestRuntime <=  // Your test runtime name
    your_module: YourModule<S> // Your custom module
);

2. Define Your Genesis Configuration

Set up the initial state with test users:

use sov_test_utils::{HighLevelOptimisticGenesisConfig, TestRunner, TestUser};

pub struct TestData<S: Spec> {
    pub admin: TestUser<S>,
    pub user1: TestUser<S>,
    pub user2: TestUser<S>,
}

pub fn setup() -> (TestData<TestSpec>, TestRunner<TestRuntime, TestSpec>) {
    let genesis_config = HighLevelOptimisticGenesisConfig::generate()
        .add_accounts_with_default_balance(3);
    
    let mut users = genesis_config.additional_accounts().to_vec();
    let test_data = TestData {
        user2: users.pop().unwrap(),
        user1: users.pop().unwrap(),
        admin: users.pop().unwrap(),
    };
    
    let runner = TestRunner::new_with_genesis(/* ... */);
    
    (test_data, runner)
}

Writing Your First Test

Here's a complete example testing a simple module operation:

#[cfg(test)]
mod tests {
    use super::*;
    use sov_test_utils::{TestRunner, TransactionTestCase};
    
    #[test]
    fn test_module_operation() {
        // Setup runner and get test user 
        let (test_data, mut runner) = setup();
        let user = &test_data.user1;
        
        // Execute a transaction
        runner.execute_transaction(TransactionTestCase {
            input: user.create_plain_message::<TestRuntime, YourModule>(
                CallMessage::SetValue { value: 42 }
            ),
            assert: Box::new(|result, state| {
                // Verify the transaction succeeded
                assert!(result.tx_receipt.is_successful());
                
                // Query and verify state
                let current_value = YourModule::default()
                    .get_value(state)
                    .unwrap_infallible()  // State access can't fail in tests
                    .unwrap();            // Handle the Option
                assert_eq!(current_value, 42);
            }),
        });
    }
}

Running Your Tests

Execute your tests from your module's root directory using standard Rust commands:

# Navigate to your module directory 
cd your-module/

# Run all tests in your module
cargo test

# Run specific test
cargo test test_module_operation

# Run with output for debugging
cargo test -- --nocapture

Testing Patterns

1. Error Scenario Testing

Test that your module handles errors correctly:

#[test]
fn test_insufficient_balance() {
    let (test_data, mut runner) = setup();
    let sender = &test_data.user1;
    let receiver = &test_data.user2;
    
    runner.execute_transaction(TransactionTestCase {
        input: sender.create_plain_message::<TestRuntime, Bank>(
            CallMessage::Transfer {
                to: receiver.address(),
                coins: Coins { 
                    amount: 999_999_999_999, // More than the sender has 
                    token_id: config_gas_token_id() 
                },
            }
        ),
        assert: Box::new(|result, _state| {
            // Verify the transaction reverted
            assert!(result.tx_receipt.is_reverted());
            
            // Check the specific error message
            if let TxEffect::Reverted(contents) = &result.tx_receipt.tx_effect {
                assert!(contents.reason.to_string().contains("Insufficient balance"));
            }
        }),
    });
}

2. Event Testing

Verify that your module emits the correct events. Note that the event enum name (e.g., TestRuntimeEvent) is automatically generated based on your runtime name.

#[test]
fn test_event_emission() {
    let (test_data, mut runner) = setup();
    let user = &test_data.user1;
    
    runner.execute_transaction(TransactionTestCase {
        input: user.create_plain_message::<TestRuntime, YourModule>(
            CallMessage::CreateItem { name: "Test".into() }
        ),
        assert: Box::new(move |result, _state| {
            assert!(result.tx_receipt.is_successful());
            assert_eq!(result.events.len(), 1);
            
            assert_eq!(
                result.events[0],
                TestRuntimeEvent::YourModule(your_module::Event::ItemCreated {
                    creator: user.address(),
                    name: "Test".into()
                })
            );
        }),
    });
}

3. Time-Based Testing

Test operations that depend on blockchain progression by advancing slots:

#[test]
fn test_time_delayed_operation() {
    let (users, mut runner) = setup();

    // 1. Initiate a time-locked operation (e.g., a vesting schedule)

    // 2. Advance blockchain time
    runner.advance_slots(100); // Advance 100 slots

    // 3. Now, the second part of the operation should succeed
    runner.execute_transaction(/* ... complete the operation ... */);
}

4. Standalone State Queries

While you can query state within a transaction's assert block, you can also query the latest visible state at any point using runner.query_visible_state. This is useful for verifying the initial genesis state or checking state after non-transaction events like advancing slots, and it is especially handy if your modules have custom hooks:

#[test]
fn test_state_queries() {
    let (test_data, mut runner) = setup();
    let admin = &test_data.admin;

    // Query the initial genesis state before any transactions
    runner.query_visible_state(|state| {
        // Query a value from your module
        let item_count = YourModule::<S>::default()
            .get_item_count(state)
            .unwrap_infallible()
            .unwrap();
        assert_eq!(item_count, 0);

    });

    // 2. Execute a transaction that changes state
    runner.execute_transaction(TransactionTestCase {
        input: admin.create_plain_message::<TestRuntime, YourModule>(
            CallMessage::CreateItem { name: "Test".into() }
        ),
        assert: Box::new(|result, _| assert!(result.tx_receipt.is_successful())),
    });

    // Query again to see the new state
    runner.query_visible_state(|state| {
        let item_count = YourModule::<S>::default()
            .get_item_count(state)
            .unwrap_infallible()
            .unwrap();
        assert_eq!(item_count, 1);
    });
}

5. Custom Module Genesis Configuration

If your module requires initialization parameters in genesis (like an admin address or initial values), you'll need to provide a custom configuration:

use sov_test_utils::{GenesisConfig};

fn setup_with_config() -> (TestUser<TestSpec>, TestRunner<TestRuntime, TestSpec>) {
    let genesis_config = HighLevelOptimisticGenesisConfig::generate()
        .add_accounts_with_default_balance(1);
    
    // Get the admin user
    let admin = genesis_config
        .additional_accounts()
        .first()
        .unwrap()
        .clone();
    
    // Create genesis with your module's configuration
    let genesis = GenesisConfig::from_minimal_config(
        genesis_config.into(),
        YourModuleConfig {
            admin: admin.address(),
            initial_value: 1000,
            // Other module-specific parameters
        },
    );
    
    let runner = TestRunner::new_with_genesis(
        genesis.into_genesis_params(),
        TestRuntime::default()
    );
    
    (admin, runner)
}

Additional Resources

For more advanced testing scenarios, the sov-test-utils crate is your primary resource. It contains all the testing components covered in this guide and much more.

We highly recommend exploring the documentation for the TestRunner struct, which provides methods for more complex scenarios, including:

  • Executing and asserting on batches of transactions.
  • Querying historical state at specific block heights.
  • Customizing gas and fee configurations.
  • Running an integrated REST API server for off-chain testing.

The sov-test-utils crate provides a comprehensive toolkit for testing every aspect of your module's behavior.

Ready for Primetime

With a thoroughly tested module, you can be confident in your logic's correctness and robustness. It's now time to bring your module to life by integrating it into a live rollup runtime.

In the next section, "Integrating Your Module," we'll guide you through adding your module to the Runtime struct, configuring its genesis state, and making it a live component of your application.

Adding Your Module to Your Runtime

Once you've built and tested your module, the final step is to integrate it into your rollup runtime. This section will walk you through the process of adding your module to a new rollup project based on our rollup starter template.

Step 1: Add Your Module as a Dependency

First, add your module to the workspace dependencies in the root Cargo.toml:

[workspace.dependencies]
# ... existing dependencies ...

# Add your module here
your-module = { path = "../path/to/your-module" }
# Or if published:
# your-module = { version = "0.1.0" }
# Or if the module is available on Github:
# your-module = { git = "https://github.com/your-github/your-module", rev = "dfd0624c32f5fb363c2190e9d911605663f7d693" }

Then add it to your STF crate's dependencies in crates/stf/Cargo.toml (where your Runtime is typically defined):

[dependencies]
# ... existing dependencies ...

your-module = { workspace = true }

[features]
default = []
native = [
    # ... existing native features ...
    "your-module/native",
]

Step 2: Add Your Module to the Runtime Struct

The central piece of your rollup's logic is the Runtime struct, usually found in crates/stf/src/runtime.rs. This struct lists all the modules that compose your rollup. To integrate your module, simply add it as a new field to this struct.

The Runtime struct uses several derive macros (#[derive(Genesis, DispatchCall, ...)]) that automatically generate the boilerplate code for state initialization, transaction dispatching, and message encoding.

use your_module::YourModule;

#[derive(Genesis, Hooks, DispatchCall, Event, MessageCodec, RuntimeRestApi)]
pub struct Runtime<S: Spec> {
    /// The bank module is responsible for managing tokens
    pub bank: sov_bank::Bank<S>,
    
    /// The accounts module manages user accounts and addresses
    pub accounts: sov_accounts::Accounts<S>,
    
    // ... other modules ...
    
    /// Your custom module
    pub your_module: YourModule<S>,
}

Step 3: Configure Genesis State

When your rollup is first launched, it populates its initial state from a genesis.json file. You need to tell the rollup how to initialize your module by adding a corresponding entry to this file.

Understanding genesis.json

The genesis.json file is a simple key-value store where each key is the snake_case name of a module field in your Runtime struct, and the value is the initial configuration for that module.

When the rollup starts, it deserializes this JSON into your module's Config struct (which you define in your module) and passes it to your module's genesis() method.

You will find this file in your rollup project at your-rollup/{DA_LAYER_NAME}/genesis.json.

Adding Your Module's Configuration

There are two cases:

1. Your Module Requires No Initial Configuration:

If your module's Config is an empty struct (e.g., pub struct MyModuleConfig {}) or can be created with Default::default(), you just need to add its name to genesis.json with an empty JSON object {}.

// In your-rollup/genesis.json
{
  "bank": { ... },
  "sequencer_registry": { ... },
  "accounts": { ... },
  "my_awesome_module": {}
}

2. Your Module Requires Initial Configuration:

If your module needs initial parameters (like an admin address or an initial value), you must provide them in the JSON object. The JSON fields must exactly match the fields of your module's Config struct.

For example, if your module has this Config struct:

// In modules/my-awesome-module/src/lib.rs
#[derive(serde::Deserialize, serde::Serialize)]
pub struct MyAwesomeModuleConfig<S: Spec> {
    pub admin_address: S::Address,
    pub initial_counter: u64,
}

Your genesis.json entry would look like this:

// In your-rollup/genesis.json
{
  // ... other modules
  "my_awesome_module": {
    "admin_address": "0x633dD354F65261d7a64E10459508F8713a537149",
    "initial_counter": 100
  }
}

Step 4: Configure Rollup Constants

The constants.toml file in your rollup's root directory allows you to configure chain-level parameters that don't change often. You should update this file to reflect your rollup's identity.

# Change these to make your chain unique
CHAIN_ID = 12345  # Your unique chain ID
CHAIN_NAME = "my-awesome-rollup"

# Gas configuration
GAS_TOKEN_NAME = "GAS"

# Other compile time parameters...

These values are compiled into your rollup binary.

Step 5: Build and Run

With everything configured, you can run your rollup with your module:

# Run the node
cargo run --bin node

Your rollup is now operational! You can:

  • Send transactions to your module
  • Query its state via the REST API
  • See events in transaction receipts

Troubleshooting

Common Issues

Module not found in genesis

  • Ensure the module name in genesis.json matches the field name in your Runtime struct

Serialization errors

  • Verify your genesis configuration matches your module's Config type
  • Check that all addresses use the correct format (0x-prefixed hex)

Build errors

  • Ensure all feature flags are properly configured
  • Check that your module exports all required types

Your Module is Live!

Congratulations! Your module is now a fully integrated part of a running rollup. You have successfully navigated the complete development lifecycle, from implementation and testing to deployment on your local machine.

You've built the core logic, but now the crucial question is: how do users actually interact with it? How do they create accounts, manage keys, and sign transactions to call your new module's methods?

The next section, "Wallets and Accounts," will bridge this gap. We'll explore how to leverage the SDK's Ethereum-compatible account system and use client-side tooling to sign and submit transactions to your rollup, bringing your application to life for end-users.

Wallets and Accounts

Now that you've built, tested, and integrated your module, the final step is enabling users to interact with it. This section covers how accounts, wallets, and transaction signing work in the Sovereign SDK.

The core design principle is Ethereum wallet compatibility. Sovereign SDK rollups use standard Ethereum addresses and signatures (Secp256k1), which unlocks the vast Ethereum wallet tooling. However, there are important nuances to understand.

The Sovereign SDK Transaction Type

A critical distinction to grasp is that while addresses and signatures are Ethereum-compatible, the transaction format itself is unique to your rollup. A Sovereign SDK rollup does not natively accept a raw, RLP-encoded Ethereum transaction.

Instead, your rollup's Runtime defines a custom RuntimeCall enum, which represents all possible actions a user can take. When a user sends a transaction, they are essentially sending a serialized RuntimeCall message that has been signed with their Ethereum-compatible key.
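
To make this concrete, here is a rough sketch of what the generated RuntimeCall enum conceptually looks like for a runtime with bank and value_setter modules. In practice this enum is produced by the runtime's derive macros rather than written by hand, and its exact shape, variant names, and serialization may differ.

use sov_modules_api::Spec;

/// One variant per module field in the Runtime struct; each variant wraps
/// that module's CallMessage type.
pub enum RuntimeCall<S: Spec> {
    Bank(sov_bank::CallMessage<S>),
    ValueSetter(value_setter::CallMessage),
}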

Signing Transactions Today: The web3.js SDK & Privy

The primary way for users and developers to sign and submit these custom transactions today is through the Sovereign web3.js client library. This library provides two main signer implementations:

1. Secp256k1Signer (For Developers)

This is a straightforward signer for programmatic use, where you have direct access to a raw private key. It's perfect for scripting, backend services, or testing.

import { Secp256k1Signer } from "@sovereign-labs/signers";

// Initialize with a raw private key
const privKey = "0d87c12ea7c12024b3f70a26d735874608f17c8bce2b48e6fe87389310191264";
const signer = new Secp256k1Signer(privKey);

// Use the signer to send a transaction
await rollup.call(myCallMessage, { signer });

2. PrivySigner (For User-Facing Applications)

For most applications, asking users for a private key is not feasible or secure. This is where Privy comes in. Privy is a powerful wallet-as-a-service provider that allows users to create a non-custodial wallet using familiar Web2 logins like email or social accounts. They can also connect their existing wallets (like MetaMask or Phantom).

The sov-rollup-starter repository includes a full example of integrating the PrivySigner, making it the most realistic and user-friendly way to onboard users to your rollup today. It handles all the complexity of wallet creation and signing, allowing users to interact with your application seamlessly.

The Future: Supporting All Ethereum Wallets by Leveraging EIP-712

While Privy provides an excellent experience, it is crucial to meet users where they are and support all existing Ethereum wallets (including hardware wallets). This will be enabled by implementing a new EIP-712 Authenticator for the Sovereign SDK runtime (which we hope to complete by August 24, 2025).

EIP-712 is an Ethereum standard for signing typed, structured data. Instead of asking the user to sign a cryptic hash, EIP-712 allows wallets to display the transaction data in a human-readable, key-value format. This dramatically improves security and user experience, as users can see exactly what they are approving.

For example, a signature request using EIP-712 would look like this in MetaMask:

[Image: A message signing request from Hyperliquid]

This upcoming feature, inspired by the pioneering work of Hyperliquid, will allow developers to support all Ethereum wallets.

Next Steps: Advanced Features

You now have a complete picture of how to build a module and enable users to interact with it. From here, you can dive into the "Advanced Topics" to learn about hooks, custom APIs, and other powerful features that will allow you to build truly sophisticated onchain applications.

Advanced Topics

This section covers advanced module development features that go beyond basic functionality. While the core module implementation handles state management and transaction processing, you may need these additional capabilities for production use cases.

All features in this section are optional. Start with the basic module implementation and add these capabilities as your requirements grow.

Hooks

In addition to call, modules may optionally implement Hooks. Hooks can run at the beginning and end of every rollup block and every transaction. BlockHooks are great for taking actions that need to happen before or after any transaction executes in a block, but be careful: no one pays for the computation done by BlockHooks, so doing any heavy computation can make your rollup vulnerable to DoS attacks.

TxHooks are useful for checking invariants or for letting your module monitor actions taken by other modules. Unlike BlockHooks, TxHooks are paid for by the user who sent each transaction.

The FinalizeHook is great for doing indexing. It can only modify AccessoryState, which makes it cheap to run but means that the results will only be visible via the API.

Using hooks is somewhat unusual (most applications only need to modify their state in response to user actions), but they are a powerful tool in some cases. See the documentation on BlockHooks, TxHooks, and FinalizeHook for more details.

Error Handling

When to Panic vs Return Errors

Panic when:

  • You encounter a bug that indicates broken invariants
  • The error is unrecoverable and continuing would compromise state integrity

When you panic, the rollup will shut down. This is correct for bugs that could corrupt your state.

Return errors when:

  • User input is invalid
  • Business logic conditions aren't met (insufficient balance, unauthorized access, etc.)
  • Any expected failure condition

Transaction errors automatically revert all state changes.
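
The distinction can be illustrated with a small, self-contained sketch (plain Rust with no SDK types) contrasting the two failure modes:

use anyhow::Result;

/// Applies a withdrawal to a (balance, total_supply) pair.
fn apply_withdrawal(balance: u64, total_supply: u64, amount: u64) -> Result<(u64, u64)> {
    // Expected failure: invalid user input. Return an error so the
    // transaction reverts cleanly and the rollup keeps running.
    if amount > balance {
        return Err(anyhow::anyhow!("Insufficient balance: {} < {}", balance, amount));
    }

    // Broken invariant: our own bookkeeping claims less total supply than this
    // one account holds. Continuing could corrupt state, so panic and halt.
    assert!(
        total_supply >= amount,
        "invariant violated: total supply {} < withdrawal amount {}",
        total_supply,
        amount
    );

    Ok((balance - amount, total_supply - amount))
}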

Writing Error Messages

Your error messages serve both end users and developers. Use anyhow with context to provide meaningful errors:

use anyhow::{Context, Result};

fn transfer(&self, from: &S::Address, to: &S::Address, token_id: &TokenId, amount: u64, state: &mut impl TxState<S>) -> Result<()> {
    let balance = self.balances
        .get(&(from, token_id), state)
        .context("Failed to read sender balance")?
        .unwrap_or(0);
    
    if balance < amount {
        // User-facing error message
        return Err(anyhow::anyhow!("Insufficient balance: {} < {}", balance, amount));
    }
    
    let new_balance = balance - amount;
    
    // Add context for debugging when operations fail
    self.balances
        .set(&(from, token_id), &new_balance, state)
        .with_context(|| format!("Failed to update balance for {} token {}", from, token_id))?;
    
    // ... rest of transfer logic
    Ok(())
}

Transaction reverts are normal and expected; log them at the debug! level if needed for debugging, not as warnings or errors.

Native-Only Code

Some functionality should only run natively on the full nodes (and sequencer), not in the zkVM during proof generation. This is a critical concept for separating verifiable on-chain logic from off-chain operational tooling.

Any code that is not part of the core state transition must be gated with #[cfg(feature = "native")]:

#[cfg(feature = "native")]
impl<S: Spec> MyModule<S> {
    // This code only compiles natively, not in zkVM
    pub fn debug_state(&self, state: &impl StateAccessor<S>) {
        // ...
    }
}

This ensures that your zk-proofs remain small and your onchain logic remains deterministic. Common use cases for native-only code include:

  • Custom REST APIs and RPC methods
  • Metrics and logging integration
  • Debugging tools
  • Integrations with external services

Transaction Prioritization and MEV Mitigation

For latency-sensitive financial applications, managing transaction order and mitigating Maximal Extractable Value (MEV) is critical. The Sovereign SDK provides a powerful, sequencer-level tool to combat toxic orderflow by allowing developers to introduce fine-grained processing delays for specific transaction types.

This is a powerful technique for applications like on-chain Central Limit Orderbooks (CLOBs). By introducing a small, artificial delay on aggressive "take" orders, a rollup can implicitly prioritize "cancel" orders. This gives market makers a crucial window to pull stale quotes before they can be exploited by low-latency arbitrageurs, leading to fairer and more liquid markets.

This functionality is implemented via the get_transaction_delay_ms method on your Runtime struct. Because this is a sequencer-level scheduling feature and not part of the core state transition logic, it must be gated behind the native feature flag.

The method receives a decoded CallMessage and returns the number of milliseconds the sequencer should wait before processing it. A return value of 0 means the transaction should be processed immediately.

Example: Prioritizing Cancels in a CLOB

// In your-rollup/stf/src/runtime.rs

// In the `impl<S> sov_modules_stf_blueprint::Runtime<S> for Runtime<S>` block:

#[cfg(feature = "native")]
fn get_transaction_delay_ms(&self, call: &Self::Decodable) -> u64 {
    // `Self::Decodable` is the auto-generated `RuntimeCall` enum for your runtime.
    // It has one variant for each module in your `Runtime` struct.
    match call {
        // Introduce a small 50ms delay on all "take" orders to give
        // market makers time to cancel stale orders.
        // (Here, `Clob` is the variant corresponding to the `clob` field in your `Runtime` struct,
        // and `PlaceTakeOrder` is the variant of the `clob` module's `CallMessage` enum.)
        Self::Decodable::Clob(clob::CallMessage::PlaceTakeOrder { .. }) => 50,

        // All other CLOB operations, like placing or cancelling "make" orders,
        // are processed immediately with zero delay.
        Self::Decodable::Clob(..) => 0,
        
        // All other transactions in other modules are also processed immediately.
        _ => 0,
    }
}

This feature gives you precise control over your sequencer's processing queue, enabling sophisticated MEV mitigation strategies without altering your core onchain business logic.

Adding Custom REST APIs

You can easily add custom APIs to your module by implementing the HasCustomRestApi trait. This trait has two methods: one that implements the routes, and an optional one that provides an OpenApi spec. You can see a good example in the Bank module:

#![cfg(feature = "native")]
impl<S: Spec> HasCustomRestApi for Bank<S> {
    type Spec = S;

    fn custom_rest_api(&self, state: ApiState<S>) -> axum::Router<()> {
        axum::Router::new()
            .route(
                "/tokens/:tokenId/total-supply",
                get(Self::route_total_supply),
            )
            .with_state(state.with(self.clone()))
    }

    fn custom_openapi_spec(&self) -> Option<OpenApi> {
        let mut open_api: OpenApi =
            serde_yaml::from_str(include_str!("../openapi-v3.yaml")).expect("Invalid OpenAPI spec");
        for path_item in open_api.paths.paths.values_mut() {
            path_item.extensions = None;
        }
        Some(open_api)
    }
}

async fn route_balance(
    state: ApiState<S, Self>,
    mut accessor: ApiStateAccessor<S>,
    Path((token_id, user_address)): Path<(TokenId, S::Address)>,
) -> ApiResult<Coins> {
    let amount = state
        .get_balance_of(&user_address, token_id, &mut accessor)
        .unwrap_infallible() // State access can't fail because no one has to pay for gas.
        .ok_or_else(|| errors::not_found_404("Balance", user_address))?;

    Ok(Coins { amount, token_id }.into())
}

REST API methods get access to an ApiStateAccessor. This special struct gives you access to both normal and accessory state values. You can freely read and write state during your API calls, which makes it easy to reuse code from the rest of your module. However, it's important to remember that API calls do not durably mutate state. Any state changes are thrown away at the end of the request.

If you implement a custom REST API, your new routes will be automatically nested under your module's router. So, in the Bank example above, the tokens/:tokenId/total-supply route can be found at /modules/bank/tokens/:tokenId/total-supply. Similarly, your OpenApi spec is automatically merged with the auto-generated one.

Note that for custom REST APIs, you'll need to write an OpenApi specification manually if you want client support.

Legacy RPC Support

In addition to custom RESTful APIs, the Sovereign SDK lets you create JSON-RPC methods. This is useful to provide API compatibility with existing chains like Ethereum and Solana, but we recommend using REST APIs whenever compatibility isn't a concern.

To implement RPC methods, simply annotate an impl block on your module with the #[rpc_gen(client, server)] macro, and then write methods which accept an ApiStateAccessor as their final argument and return an RpcResult. You can see some examples in the Evm module.

#![cfg(feature = "native")]
#[rpc_gen(client, server)]
impl<S: Spec> Evm<S> {
    /// Handler for `net_version`
    #[rpc_method(name = "eth_getStorageAt")]
    pub fn get_storage_at(
        &self,
        address: Address,
        index: U256,
        state: &mut ApiStateAccessor<S>,
    ) -> RpcResult<U256> {
        let storage_slot = self
            .account_storage
            .get(&(&address, &index), state)
            .unwrap_infallible()
            .unwrap_or_default();
        Ok(storage_slot)
    }
}

Mastering Your Module

By leveraging Hooks, robust error handling, and custom APIs, you can build sophisticated, production-grade modules that are both powerful and easy to operate.

With a deep understanding of module implementation, you may next want to optimize your rollup's performance. The next section on "Understanding Performance" will dive into state access patterns and cryptographic considerations that can significantly impact your application's throughput.

Understanding Performance

State Access

The vast majority of the cost of executing a Sovereign SDK transaction comes from state accesses. When you call item.set(&value), the SDK serializes your value and stores the bytes in cache. Each time you access a value using item.get(), the SDK deserializes a fresh copy of your value from the bytes held in cache, falling back to disk if necessary.

Each time you access a value that's not in cache, the SDK has to generate a merkle proof of the value, which it will consume when it's time to generate a zero-knowledge proof. Similarly, each time you write a new value, the SDK has to generate a merkle update proof. This makes reading or writing a hot value at least an order of magnitude cheaper than touching a cold one (where "hot" means the value has already been accessed in the current block). So, if you have state items that are frequently accessed together, it's a good idea to bundle them into a single StateValue or store them under the same key in a StateMap.

As a rule of thumb, for every 10% of the time two values are accessed together, you should be willing to add an extra 200 bytes to your StateValue. In other words, if two values are accessed together 30% of the time, you should put them together unless either of the state items is bigger than 600 bytes. (Exception: if two items are always accessed together, group them together, no questions asked.)
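
As an illustration of the bundling advice above, the sketch below groups two values that are almost always read and written together into a single StateValue, so one cold read (and one merkle update) covers both. The struct and field names are invented for the example.

use sov_modules_api::macros::serialize;

/// The best bid and best ask move together on nearly every trade, so store
/// them under one key instead of as two separate StateValues.
#[derive(Clone, Debug, PartialEq, Eq)]
#[serialize(Borsh, Serde)]
pub struct TopOfBook {
    pub best_bid: u64,
    pub best_ask: u64,
}

// In the module struct:
//     #[state]
//     pub top_of_book: StateValue<TopOfBook>,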

Cryptography

The other common source of performance woes is heavy-duty cryptography. If you need to do any cryptographic operations, check whether the Spec trait provides a method in its Spec::CryptoSpec that already does what you want. If it does, use that; the SDK will ensure you get an implementation which is optimized for the SDK's peculiar requirements. If you need access to more exotic cryptography, you can use pretty much any existing Rust library, but be aware that the performance penalty might be severe when it comes time to prove your module's execution, which could limit your total throughput. If you do need advanced cryptography, you may need to pick an implementation that's suited to a particular zkVM (like SP1 or Risc0) and only use that VM with your module.

Building for Scale

By keeping these performance principles in mind, bundling hot state and using optimized cryptography, you can design your modules to be highly efficient, ensuring your rollup can scale to meet user demand.

While building custom logic is powerful, you don't always have to start from scratch. The Sovereign SDK comes with a rich set of "Prebuilt Modules" for common tasks like token management, bridging, and sequencer orchestration. The next section provides an overview of these modules, which you can leverage to accelerate your development.

Prebuilt Modules

Here's a comprehensive list of all existing modules:

User Facing

sov-bank - Token management module for creating, transferring, and burning tokens with unique addresses and names

sov-chain-state - Provides access to blockchain state including block height, hash, and general chain information

sov-paymaster - Enables third-party gas sponsorship with per-sequencer payer configuration, so that users don't need any gas tokens to start transacting on your rollup

sov-evm - EVM compatibility layer that processes RLP-encoded Ethereum transactions and provides standard Ethereum endpoints

sov-svm - SVM compatibility layer that processes Solana transactions and provides standard Solana endpoints (built & maintained by the Termina team)

Bridging

sov-hyperlane-mailbox - Part of the five-module Hyperlane (bridging) integration, which enables any Sovereign SDK rollup to bridge messages and tokens to and from any EVM, SVM, or Cosmos SDK chain. The integration consists of:

  • Mailbox: Sends and receives cross-chain messages
  • MerkleTreeHook: Computes merkle root of sent messages
  • InterchainGasPaymaster: Handles cross-chain fee payments to relayers
  • Warp: Enables interchain token transfers
    • Supports validator announcements and multisig ISMs

Core

sov-accounts - Account management system that automatically creates addresses for first-time senders and manages credential-to-address mappings

sov-uniqueness - Transaction deduplication logic using either nonce-based (Ethereum-style) or generation-based methods (for low-latency applications)

sov-blob-storage - Deferred blob storage system implementing the BlobSelector rollup capability (which enables soft-confirmations without losing censorship resistance)

Incentive & Economics

sov-attester-incentives - Complete attestation/challenge verification workflow with bonding and rewards for optimistic rollups

sov-prover-incentives - Prover registration, proof validation, slashing, and rewards distribution

sov-sequencer-registry - Manages sequencer registration, slashing, and rewards

sov-revenue-share - Manages automated revenue sharing

Development & Testing

sov-synthetic-load - Load testing module exposing heavy transaction types for performance testing

sov-value-setter - Simple testing module for storing and retrieving a single value

module-template - Starter template demonstrating proper module structure with state-changing methods and queries

Standing on the Shoulders of Giants

Leveraging these prebuilt modules can save you significant development time and effort, allowing you to focus on your application's unique business logic. You've now completed the "Writing Your Application" chapter and have a comprehensive understanding of how to build, test, and deploy powerful rollups.

The next major part of our book, "Instrumenting Your Rollup," will shift focus from development to operations, teaching you how to monitor your running rollup using metrics and structured logging.

Instrumenting Your Rollup

Proper instrumentation is essential for monitoring, debugging, and optimizing your rollup in production. The Sovereign SDK provides comprehensive observability tools that help you understand your rollup's behavior and performance.

This section covers:

  • Metrics - Track performance indicators and business metrics
  • Logging - Debug and monitor your rollup's execution

[TODO: Insert section on spinning up Grafana dashboards to monitor your rollup seamlessly]

Important: Native-Only Features

All instrumentation code must be gated with #[cfg(feature = "native")] to ensure it only runs on full nodes, not in the zkVM during proof generation. This allows you to instrument generously without affecting proof generation performance.

Metrics

The SDK includes a custom metrics system called sov-metrics designed specifically for rollup monitoring. It uses the Telegraf line protocol format and integrates with Telegraf through socket listeners for efficient data collection. Metrics are automatically timestamped and sent to your configured Telegraf endpoint, which typically forwards them to InfluxDB for storage and Grafana for visualization. Metrics can only be tracked in native mode (not in zkVM).

Important: Metrics are emitted immediately when tracked and are NOT rolled back if a transaction reverts. This means failed transactions will still have their metrics recorded, which can be useful for debugging and monitoring error rates.

Basic Example

#[cfg(feature = "native")]
use sov_metrics::{track_metrics, start_timer, save_elapsed};

impl<S: Spec> MyModule<S> {
    fn process_batch(&self, items: Vec<Item>) -> Result<()> {
        // Record the batch size up front, since the loop below consumes `items`
        let item_count = items.len();

        // Time the operation using the provided macros
        start_timer!(batch_timer);

        for item in items {
            self.process_item(item)?;
        }

        save_elapsed!(elapsed SINCE batch_timer);

        #[cfg(feature = "native")]
        {
            // Track batch size
            track_metrics(|tracker| {
                tracker.submit_inline(
                    "mymodule_batch_size",
                    format!("items={}", item_count),
                );
            });
            
            // Track processing time
            track_metrics(|tracker| {
                tracker.submit_inline(
                    "mymodule_batch_processing_time",
                    format!("duration_ms={}", elapsed.as_millis()),
                );
            });
        }
        
        Ok(())
    }
}

Tracking Custom Metrics

To track custom metrics, implement the Metric trait:

// Implement your custom metric in a file of your own choosing...
#![cfg(feature = "native")]
use std::io::Write;

use sov_metrics::Metric;

#[derive(Debug)]
struct TransferMetric {
    from: String,
    to: String,
    token_id: TokenId,
    amount: u64,
    duration_ms: u64,
}

impl Metric for TransferMetric {
    fn measurement_name(&self) -> &'static str {
        "mymodule_transfers"
    }
    
    fn serialize_for_telegraf(&self, buffer: &mut Vec<u8>) -> std::io::Result<()> {
        // Format: measurement_name,tag1=value1,tag2=value2 field1=value1,field2=value2
        write!(
            buffer,
            "{},from={},to={},token_id={} amount={},duration_ms={}",
            self.measurement_name(),
            self.from,
            self.to,
            self.token_id,
            self.amount,
            self.duration_ms
        )
    }
}

// In your module file...
#[cfg(feature = "native")]
use sov_metrics::{track_metrics, start_timer, save_elapsed};
#[cfg(feature = "native")]
use my_custom_metrics::TransferMetric;

// Adapted from Bank module 
impl<S: Spec> Bank<S> {
    fn transfer(&self, from: &S::Address, to: &S::Address, token_id: &TokenId, amount: u64, state: &mut impl TxState<S>) -> Result<()> {

        start_timer!(transfer_timer);
        
        // Perform the transfer
        self.do_transfer(from, to, token_id, amount, state)?;
        
        save_elapsed!(elapsed SINCE transfer_timer);
        
        #[cfg(feature = "native")]
        {
            // Track your custom metric
            track_metrics(|tracker| {
                tracker.submit_metric(TransferMetric {
                    from: from.to_string(),
                    to: to.to_string(),
                    token_id: token_id.clone(),
                    amount,
                    duration_ms: elapsed.as_millis() as u64,
                });
            });
        }
        
        Ok(())
    }
}

Best Practices

Note: While the SDK provides comprehensive metrics infrastructure, individual modules in the SDK don't currently use metrics directly. Most metrics are tracked at the system level (runner, sequencer, state transitions). The examples here show how you could add metrics to your custom modules.

  1. Always gate with #[cfg(feature = "native")] - Metrics are not available in zkVM
  2. Use meaningful measurement names
    • Many of the packages that the Sovereign SDK runs under the hood emit their own metrics. To make it easy to tell which metrics come from a Sovereign SDK component, we prefix our metric names with sov_. We recommend following the pattern sov_user_module_name_metric_type so that user-level metrics are easy to pick out.
  3. Separate tags and fields properly (see the example after this list)
  4. Track business-critical metrics:
    • Transaction volumes and types
    • Processing times for key operations
    • Error rates and types
  5. Avoid high-cardinality tags - Don't use unique identifiers like transaction hashes as tags
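
To illustrate point 3, here's a hedged variant of the TransferMetric serializer from earlier: the low-cardinality token_id stays a tag (before the space), while the addresses, amount, and duration are recorded as fields (after the space):

fn serialize_for_telegraf(&self, buffer: &mut Vec<u8>) -> std::io::Result<()> {
    // Tags (before the space) are indexed and should stay low-cardinality;
    // fields (after the space) hold the measured values. String fields are quoted.
    write!(
        buffer,
        "{},token_id={} from=\"{}\",to=\"{}\",amount={},duration_ms={}",
        self.measurement_name(),
        self.token_id,
        self.from,
        self.to,
        self.amount,
        self.duration_ms
    )
}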

Logging

The SDK uses the tracing crate for structured logging, providing rich context and efficient filtering.

Important: Logs are emitted immediately when generated and are NOT rolled back if a transaction reverts. This means failed transactions will still have their logs recorded, which is useful for debugging or monitoring why transactions failed.

Basic Logging Patterns

// Adapted from the `Bank` module
use anyhow::Context as _; // for `.with_context(...)` below
use tracing::trace;

impl<S: Spec> MyModule<S> {
    pub(crate) fn freeze(
        &mut self,
        token_id: TokenId,
        context: &Context<S>,
        state: &mut impl TxState<S>,
    ) -> Result<()> {
        // The transaction sender, taken from the call context
        let sender = context.sender();

        // Logging at the start of the operation
        trace!(freezer = %sender, "Freeze token request");

        // Redundant code elided here...

        token
            .freeze(sender)
            .with_context(|| format!("Failed to freeze token_id={}", &token_id))?;

        self.tokens.set(&token_id, &token, state)?;

        // Logging at the end of operation
        trace!(
            freezer = %sender,
            %token_id,
            "Successfully froze tokens"
        );

        Ok(())
    }
}

Using Spans for Context

Spans are like invisible context that gets automatically attached to every log line within their scope. Instead of passing context like batch_id or user_id through every function call just so you can log it, you create a span at the top level and all logs within that span automatically include that context.

Think of spans as a way to say "everything that happens from here until the span ends is part of this operation." This is especially useful when debugging - you can filter logs by span fields to see everything that happened during a specific batch process or user request.

use tracing::{debug, info, instrument, trace};

// Example 1: Using the #[instrument] macro (easiest way)
#[instrument(skip(self, state, items))]  // skip large/non-Debug types
fn process_batch(&self, batch_id: BatchId, items: Vec<Item>, state: &mut impl TxState<S>) -> Result<()> {
    // The #[instrument] macro automatically adds all function parameters (except skipped ones) to the span
    // So batch_id is automatically included in all logs within this function
    info!(item_count = items.len(), "Starting batch processing");
    
    for (idx, item) in items.iter().enumerate() {
        // This log will show: batch_id=123 item_id=456 "Processing item"
        trace!(item_index = idx, item_id = %item.id, "Processing item");
        self.process_single_item(item, state)?;
    }
    
    info!("Batch processing completed");
    Ok(())
}

// Example 2: Creating spans manually (when you need more control)
fn process_user_request(&self, user_id: UserId, request: Request) -> Result<()> {
    // Create a span with context that will be included in all logs
    let span = tracing::span!(
        tracing::Level::INFO,
        "user_request", // span name
        %user_id,
        request_type = %request.request_type()
    );
    
    // Enter the span - all logs from here will include user_id and request_type
    let _enter = span.enter();
    
    debug!("Validating request");
    self.validate_request(&request)?;
    
    debug!("Processing request");
    self.process(&request)?;
    
    info!("Request completed successfully");
    Ok(())
}

Log Levels

  • error! - Unrecoverable errors that affect module operation
  • warn! - Recoverable issues or unusual conditions
  • info! - High-level operations (tx processing, module lifecycle)
  • debug! - Detailed operational data (state changes, intermediate values)
  • trace! - Very detailed execution flow

Best Practices

  1. Structure your logs:

    // Good - structured, filterable
    debug!(user = %address, action = "deposit", amount = %value, "Processing deposit");
    
    // Avoid - unstructured string interpolation
    debug!("Processing deposit for {} of amount {}", address, value);
  2. Include relevant context:

    • Transaction/operation IDs
    • User addresses (when relevant)
    • Amounts and values
    • Error details
    • State transitions
  3. Don't log transaction reverts as errors or warnings: Transaction reverts are expected behavior. Log them at debug! level if needed for debugging:

    if balance < amount {
        debug!(
            user = %sender,
            requested = %amount,
            available = %balance,
            "Transfer failed due to insufficient balance"
        );
        return Err(anyhow::anyhow!("Insufficient balance"));
    }
  4. Keep frequently triggered logs at debug or trace level: Any log that gets triggered by every call to your module should use debug! or trace! to avoid log spam:

    // Good - routine operations at trace level
    trace!(method = "transfer", from = %sender, "Processing transfer request");
    
    // Bad - routine operations at info level will spam logs
    info!("Transfer request received");  // Don't do this for every call
  5. Use conditional logging for expensive operations:

    #[cfg(feature = "native")]
    fn debug_state(&self, state: &impl StateAccessor<S>) {
        if tracing::enabled!(tracing::Level::TRACE) {
            let total_accounts = self.count_accounts(state);
            let total_balance = self.calculate_total_balance(state);
            trace!(
                %total_accounts,
                %total_balance,
                "Module state snapshot"
            );
        }
    }

Set log levels via environment variables:

RUST_LOG=info,my_module=debug cargo run

Advanced Features

The Sovereign SDK includes many advanced features beyond the core functionality covered in this documentation.

To learn more about implementing these features in your rollup, just shoot us a message in our support channel or fill out our partner form and we'll reach out to you.

Performance & Reliability

  • Configurable delays – Enable instant cancels & oracle updates while throttling toxic flow
  • Automatic sequencer fail-over – Seamless failover across data centers ensures your soft-confirmations survive even the worst outages
  • Intra-block caching – Cache state that's repeatedly accessed throughout a block, eliminating redundant instantiation per transaction and significantly boosting performance
  • Dev-ops tooling – Production-ready observability and deployment tools

Integrations & Compatibility

  • Privy integration – Click-to-sign flow using Privy
  • Ethereum or Solana addresses and wallet support – Use any address format or wallet you prefer
  • Hyperlane integration – Bridge liquidity from any EVM, SVM, or Cosmos SDK chain
  • Multiple DA layers – Run with Celestia, Bitcoin, Solana, or bring your own DA solution
  • Multiple zkVM integrations – Leverage the zkVM that best suits your application's performance characteristics: Risc0, SP1 (or soon any other Rust-compatible zkVM)

We're happy to help you leverage these features to build production-ready rollups tailored to your exact requirements.

SDK Contributors

This section provides an overview of the Sovereign SDK aimed at core contributors to the framework. It describes the primary components of the SDK at the level of Rust crates.

Transaction Lifecycle Overview

The transaction lifecycle begins with a user. First, the user opens a frontend and gets some information about the current state of the blockchain. Then, they open their wallet and sign a message indicating what action they want to take.

Once a message is signed, it needs to be ordered before full nodes can execute it, so the user's next step is to contact a sequencer to post the transaction onto the DA layer.

The sequencer accepts a number of transactions and bundles them into a single Blob, which he sends to the DA layer for inclusion. This Blob is ultimately sent to a Proposer on the DA layer, who includes it in his block and gets it approved by the DA layer's validator set. Once consensus is reached on the DA layer block containing the sequencer's Blob, the full nodes of the rollup parse its contents and execute the transactions, computing a new rollup state.

Next, specialized actors ("provers" or "attesters") generate a proof of the new rollup state and post it onto the DA layer. Finally, light clients of the rollup (end-users and/or bridges on other blockchains) verify the proof and see the results of the transaction.

Diagram of the Transaction Lifecycle

SDK Design Philosophy

Now that we've established the basic transaction lifecycle, we have the background we need to really dig into the design of the Sovereign SDK.

At a high level, the design process for the SDK was essentially just tracing the transaction lifecycle diagram and asking two questions at each step:

  • "How do we implement this step so that we really 'inherit the security of the L1'?"
  • "Within those constraints, how do we build the SDK to accommodate the broadest range of use cases?"

Step 1: Retrieving Information

Before doing anything, users need to find out about the current state of the rollup. How can we enable that?

At this step, we have several conflicting goals and constraints:

  • We want the user's view of the rollup to be as up-to-date as possible
  • We want to provide the strongest possible guarantees that the user's view of state is correct
  • We want to minimize costs for the rollup
  • Users may not be willing/able to download more than a few hundred kilobytes of data or do any significant computation

Obviously, it's not possible to optimize all of these constraints simultaneously. So, in the Sovereign SDK, we allow developers some flexibility to pick the appropriate tradeoffs for their rollups - and we give end-users additional flexibility to choose the setup that works best for them.

In practice, that means that...

  • Developers can choose between Optimistic and ZK rollups, trading transaction cost for time-to-finality.
  • Users can choose between running a full node (instant state access, but expensive), running a light client (slower state access, but much cheaper and trustless), and trusting a full node (instant state access).

Step 2: Signing Transactions

The SDK supports several signing/verification modes. The standard choice for interacting with Sovereign SDK chains is our custom UniversalWallet, which is available as a Metamask snap and a Ledger app. The UniversalWallet integrates tightly with the Sovereign SDK to render transactions in human-readable format. However, many chains need compatibility with legacy formats like Ethereum RLP transactions or Solana instructions.

We've made the pragmatic choice to be as compatible as possible with existing crypto wallets using our RuntimeAuthenticator abstraction. By implementing the RuntimeAuthenticator trait, developers can bring their own transaction deserialization and authorization logic. Even better, we allow rollups to support several different Authenticator implementations simultaneously. This allows developers to retain backward compatibility with legacy transaction formats, without compromising on support for their native functionality.

Step 3: Sequencing

Once a user has signed a transaction, we need to broadcast it to all full nodes of the rollup.

Since a primary design goal is to inherit the security of the underlying blockchain, we want to ensure that users are always able to fall back on the censorship resistance of the L1 if necessary. At the same time, we don't expect users to interact directly with the underlying blockchain in the normal case. The underlying blockchain will charge fees in its own token, and we don't need or want users of the rollup to be thinking about exchange rates and L1 gas limits.

We also need to protect the rollup from spam. In a standard blockchain, spam is handled by ensuring that everyone pays for the computation that the network does on their behalf. Transactions with invalid signatures are filtered out at the peer-to-peer layer and never get included in blocks. This means that an attacker wanting to spam the rollup has no asymmetric advantage. He can send invalid transactions to the few nodes he happens to be directly connected to, but they will just disconnect. The only way to get the entire blockchain network to process a transaction is to provide a valid signature and pay enough gas fees to cover the cost of execution.

In a rollup, things are different. Rollups inherit the consensus of an underlying blockchain which doesn't know about the transaction validity rules of the rollup. Since the underlying chain doesn't know the rules, it can't enforce them. So, we need to be prepared to deal with the fact that the rollup's ledger is dirty. This is bad news, because checking transaction signatures is expensive - especially in zero-knowledge. If we aren't careful, an attacker could flood the rollup's ledger with malformed transactions and force the entire network to pay to check thousands of invalid signatures.

This is where the sequencer comes in. Sequencers accept transactions from users and bundle them into Blobs, which get posted onto the L1. At the rollup level, we force all sequencers to register by locking up some tokens - and we ignore any transactions which aren't posted by a registered sequencer. If a sequencer's bundle includes any transactions which have invalid signatures, we slash his deposit and remove him from the registry. This solves two problems at once. Users don't need to worry about obtaining tokens to pay for inclusion on the DA layer, and the rollup gets builtin spam protection.

Unfortunately, this setup also gives sequencers a lot of power. Since the sequencer handles transactions before they've gone through the DA layer's consensus mechanism, he can re-order transactions - and potentially even halt the rollup by refusing to publish new transactions.

To mitigate this power, we need to put a couple of safeguards in the protocol.

First, we allow anyone to register as a sequencer by depositing tokens into the sequencer registry. This is a significant departure from most existing rollups, which rely on a single trusted sequencer.

Second, we allow sequencers to register without sending a transaction through an existing sequencer. Specifically, we add a rule that the rollup will consider up to K extra blobs from unregistered sequencers in each rollup block. If any of the first K "unregistered" blobs conform to a special format, then the rollup will interpret them as requests to register a new sequencer. By capping the number of unregistered blobs that we look at, we limit the usefulness of unregistered blobs as a DOS vector while still ensuring that honest sequencers can register relatively quickly in case of censorship.

Finally, we try to make sequencing competitive by distributing some of the fees from each transaction to the sequencer who included it. This incentivizes new sequencers to register if the quality of service is low.


Ok, that was a lot of information. Let's recap.

In the Sovereign SDK, sequencers are middlemen who post transactions onto the DA layer, but it's the DA layer which ultimately decides on the ordering of transactions. Anyone can register as a sequencer, but sequencers expose themselves to slashing if they include transactions with invalid signatures (or certain other kinds of obvious spam).

That covers a huge chunk of sequencing. But there are still two topics we haven't touched on: stateful validation, and soft confirmations.

Stateful Validation

Up to this point, we've been talking about transactions as if they're always either valid or invalid for all time, regardless of what's happening on the rollup. But in the real world (especially when there are many sequencers), that's not the case. To give just one example, it's entirely possible for an account to burn through all of its funds with a single transaction, leaving nothing to pay gas with the next time around. So, if two sequencers publish blobs at about the same time, it's very possible that the first blob will cause some transactions in the second one to become invalid.

This complicates our analysis. Previously, we assumed that a sequencer was malicious if he caused any invalid transactions to be processed. That meant that we could safely slash his deposit and move on whenever we encountered a validation error. But now, we can't make that assumption. Otherwise, sequencers would have to be extremely conservative about which transactions they included - since a malicious (or confused) user could potentially cause a sequencer to get slashed by sending conflicting transactions to two different sequencers at the same time.

On the other hand, we don't want to let sequencers get away with including transactions that they know are invalid. Otherwise, a malicious sequencer could include invalid transactions "for free", causing the rollup to do a bunch of wasted computation.

We address these issues by splitting transaction validation into two categories. Stateless validation (i.e. signature checks) happens first, and transactions which fail stateless validation are invalid forever. If a sequencer includes a transaction which is statelessly invalid, then we know he's malicious. After a transaction has passed stateless validation, we proceed to make some stateful checks (i.e. checking that the transaction isn't a duplicate, and that the account has enough funds to pay for gas). If these checks fail, we charge the sequencer a small fee - just enough to cover the cost of the validation.

This ensures that sequencers are incentivized to do their best to filter out invalid transactions, and that the rollup never does any computation without getting paid for it, without being unfairly punitive to sequencers.

Soft Confirmations

Now that we've covered the minimum requirements for sequencers, we can move on to soft confirmations.

One of the biggest selling points of rollups today is the ability to tell users the outcome of their transaction instantly. Under the hood, this experience is enabled by giving a single trusted sequencer a "lock" on the rollup state. Because he holds the lock, the sequencer can run a local simulation to determine the exact effect of a transaction before he posts it on the DA layer.

Unfortunately, this introduces a load bearing point of centralization. If the centralized sequencer becomes unavailable (or is malicious), the rollup halts and users have little recourse.

On existing rollups, this issue is somewhat mitigated by providing an "inbox" on the DA layer where users can send special "forced withdrawal" transactions. However, in most existing rollups these "forced" transactions are significantly less powerful than ordinary ones (users are often limited to only withdrawing funds), and the delay period before they are processed is long.

In the Sovereign SDK, we try to do better. Unfortunately, there's no way to enable soft confirmations without giving some entity a lock on (some subset of) the rollup state. So, this is exactly what we do. We allow rollup deployers to specify some special "preferred sequencer", which has a partial lock on the rollup state.

In order to protect users in case of a malicious sequencer, though, we make a few additional changes to the rollup.

First, we separate the rollup state into two subsets, "user" space and "kernel" space. The kernel state of the rollup is maintained programmatically, and it depends directly on the headers of the latest DA layer blocks. Inside of the protected kernel state, the rollup maintains a list of all the blobs that have appeared on the DA layer, and the block number in which they appeared.

Second, we prevent access to the kernel state of the rollup during transaction execution. This prevents users from creating transactions that could accidentally invalidate soft-confirmations given by the sequencer, as well as preventing the sequencer from deleting forced transactions before they can be processed.

Finally, we add two new invariants:

  1. Every blob which appears on the (canonical) DA chain will be processed within some fixed number of blocks

  2. All "forced" (non-preferred) transactions will be processed in the order they appeared on the DA layer

To help enforce these invariants, we add a concept of a "visible" slot number. The visible slot number is a nondecreasing integer which represents the block number that the preferred sequencer observed when he started building his current bundle. Any "forced" blobs which appear on the DA layer are processed when the visible slot number advances beyond the number of the real slot in which they appeared.

Inside the rollup, we enforce that...

  • The visible slot number never lags behind the real slot number by more than some constant K slots

    • This ensures that "forced" transactions are always processed in a reasonable time frame
  • The visible slot number increments by at least one every time the preferred sequencer successfully submits a blob. The sequencer may increment the virtual slot by more than one, but the maximum increment is bounded by a small constant (say, 10).

  • The visible slot number is never greater than the current (real) slot number

  • Transactions may only access information about the DA layer that was known at the time of their virtual slot's creation. Otherwise, users could write transactions whose outcome couldn't be predicted, making it impossible to give out soft confirmations.

    • For example, a user could say if current_block_hash % 2 == 1 { do_something() }, which has a different outcome depending on exactly which block it gets included in. Since the rollup sequencer is not the L1 block proposer, he doesn't know what block the transaction will get included in! By limiting transactions to accessing historical information, we avoid this issue.
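
The sketch below restates these rules as a single check. The constants and names are illustrative, not the SDK's actual implementation:

/// Illustrative only: validates a proposed visible slot number against the rules above.
fn visible_slot_is_valid(prev_visible: u64, proposed_visible: u64, real_slot: u64) -> bool {
    const MAX_LAG: u64 = 100; // "K": how far the visible slot may lag the real slot
    const MAX_INCREMENT: u64 = 10; // max advance per preferred-sequencer batch

    proposed_visible > prev_visible // must advance by at least one
        && proposed_visible - prev_visible <= MAX_INCREMENT
        && proposed_visible <= real_slot // never ahead of the real slot
        && real_slot - proposed_visible <= MAX_LAG // forced txs get processed in bounded time
}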

What all of this means in practice is that...

  • The visible state never changes unless either the preferred sequencer submits a batch, or a timeout occurs (i.e. the visible slot lags too far). This ensures that the preferred sequencer always knows the exact state that he's building on top of.
  • An honest sequencer wants to keep the virtual slot number as close to the real slot number as possible. This way, he has more buffer to absorb downtime without the state changing. This reduces the risk of soft-confirmations being invalidated.
  • Honest sequencers can always give accurate soft confirmations, unless the DA layer experiences a liveness failure lasting more than K slots.
  • Transactions can access information about the underlying blockchain with the best latency that doesn't invalidate soft confirmations.

Handling Preferred Sequencer Failure

With the current design, the Sovereign SDK supports soft confirmations while providing a reasonably powerful forced transaction mechanism. We also provide some limited protection from a malicious sequencer. If the sequencer is malicious, he can - at worst - delay transaction processing by some constant number of blocks. He can't prevent forced transactions from being processed, and he can't selectively delay transactions.

We also provide some limited protection if the preferred sequencer commits a slashable offense. In this case, the rollup enters "recovery mode", where it reverts to standard "based" sequencing (where all sequencers are equal). In this mode, it advances the virtual slot number two-at-a-time until the rollup is caught up, at which point the rollup behaves as if there had never been a preferred sequencer.

In the future, we may also add slashing if the preferred sequencer gives "soft-confirmations" which turn out to be invalid, but this requires some additional design work.

Step 4: Execution

Once a transaction is sequenced, the rollup needs to process it.

At a high level, a Sovereign SDK transaction goes through the following sequence:

  1. (Stateless) Deserialization: Decoding the bytes of the transaction into meaningful components (signature, ChainID, etc)

  2. (Stateful) Pre-validation: Checking that the address which is claiming to have authorized the transaction exists and retrieving its preferences for authorization. For example, if the address is a multisig, fetch the set of public keys and the minimum number of signatures.

  3. (Usually Stateless) Authentication: Checking that the transaction is authorized. For example, checking that the signatures are valid.

  4. (Stateful) Authorization: Matching the results of the authentication and pre-validation steps to decide whether to execute. This step also reserves the funds to pay for gas used during transaction execution. Note: state changes up to this point are irreversible; state changes beyond this point are either committed or reverted together.

  5. (Stateful) Pre-dispatch hook: This hook allows all modules to inspect the transaction (and their own state) and do initialization before the transaction is executed. For example, a wallet module might use this hook to check the user's balance and store it for later retrieval. This hook may abort the transaction and revert any state changes by returning an Error.

  6. (Stateful) Execution: The transaction is dispatched to a single target module for execution. That module may invoke other modules if necessary during execution. If this call returns an error, all state changes from step 5 onward are reverted.

  7. (Stateful) Post-dispatch hook: This hook allows all modules to inspect their state and revert the transaction if necessary. If this call returns an error, all state changes from step 5 onward are reverted.

  8. (Stateful) Post-execution: After transaction execution, any unused gas is refunded to the payer

As described in the "Sequencing" documentation, sequencers are slashed if either of the two stateless steps fails. If any of the stateful steps prior to execution fails, the sequencer is penalized - but just enough to cover the cost of the work that has been done. If the transaction fails during execution, the costs are paid by the user (or whichever entity is sponsoring the gas cost of the transaction.)

For more details on execution, see [TODO]

Step 5: Proving

Once a transaction is executed, all of the rollup full nodes know the result instantly. Light clients, on the other hand, need proof. In this section, we'll describe the different kinds of proof that the Sovereign SDK offers.

Zero-Knowledge Proofs

The most powerful configuration for a rollup is zero-knowledge mode. In this mode, light clients can trustlessly sync the chain with near-zero overhead and only minutes of lag behind the chain tip. This enables fast and trustless bridging between rollups, and between the rollup and the execution environment of its DA layer (if applicable).

In the Sovereign SDK, proving is asynchronous (meaning that we post raw transactions on the DA layer - so that full nodes can compute the rollup state even before a proof is generated). This means that light clients have a view of the state that lags a little bit behind full nodes.

Proof Statements

All zero-knowledge proofs have the form, "I know of an input such that...". In our case, the full statement is:

I know of a DA layer block with hash X (where X is a public input to the proof) and a rollup state root Y (where Y is another public input) such that the rollup transitions to state Z (another public input) when you apply its transaction processing rules.

To check this proof, a client of the rollup needs to check that the input block hash X corresponds to the next DA layer block, and that the input state root Y corresponds to the current rollup state. If so, the client can advance its view of the state from Y to Z.
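
In code, the light-client side of this check might look roughly like the following sketch (the types and field names are illustrative; the actual public inputs are defined by the proof-system integration):

/// Public outputs committed to by a single block proof (illustrative)
struct BlockProofOutputs {
    da_block_hash: [u8; 32],   // X: the DA block that was executed
    prev_state_root: [u8; 32], // Y: the rollup state before execution
    new_state_root: [u8; 32],  // Z: the rollup state after execution
}

/// Advance the client's view of state if the proof's commitments match what the client expects.
/// Assumes the zero-knowledge proof itself has already been verified.
fn advance_state(
    expected_da_hash: [u8; 32],
    current_state_root: [u8; 32],
    outputs: &BlockProofOutputs,
) -> Option<[u8; 32]> {
    if outputs.da_block_hash == expected_da_hash && outputs.prev_state_root == current_state_root {
        Some(outputs.new_state_root)
    } else {
        None
    }
}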

This works great for a single block. But if a client needs to validate the entire history of the rollup, checking proofs of each block would get expensive. To alleviate this problem, we use recursive proofs to compress multiple block proofs into one. (A nice property of zero-knowledge proofs is that the work to verify a proof is roughly constant - so checking this recursive "aggregate" proof is no more expensive than checking the proof of a single block.)

Each AggregateProof is a statement of the form:

I know of a (previous) valid AggregateProof starting from A (the genesis block hash, a public input) with state root B (the rollup's genesis state, a public input) and ending at block hash C with state root D. And, I know of a sequence of valid proofs such that...

  • For each proof, the block header has the property that header.prev_hash is the hash of the previous block header
  • For each proof, the input state root is the output root of the previous proof
  • The block header from the first proof has prev_hash == C
  • The first proof has input state root D
  • The final proof in the chain ends at the new block hash and output state root, which become public inputs of this AggregateProof

Incentives

Generating zero-knowledge proofs is expensive. So, if we want proofs to be generated, we need to incentivize proof creation in protocol, preferably using the gas fees that users are already paying.

In a standard blockchain, the goal of transaction fee markets is to maximize consumer surplus. They achieve this by allocating a scarce resource (blockspace) to the people who value it most. Analysis shows that EIP-1559 is extremely good at solving this optimization problem in the setting where supply is fixed and demand varies rapidly. EIP-1559 adjusts the price of blockspace to the exact price level at which demand matches supply.

In zk-rollups, we have a slightly different setup. Our supply of blockspace is not constant. Instead, it's possible to invest more money in proving hardware in order to increase the rollup's throughput. However, bringing more prover capacity online takes time. Deals have to be negotiated, hardware provisioned, etc. So, in the short term, we model prover capacity as being fixed - and we use EIP-1559 to adjust demand to fit that target.

In the long run, we want to adjust the gas limit to reflect the actual capacity of available provers. (Note that this is not yet fully implemented). To facilitate this, we will track the rollup's gas usage and proving throughput (measured in gas per second) over time. If rollup blocks are full and provers are able to keep up, we will gradually increase the gas limit until blocks are no longer full or provers start to fall behind.

This still leaves one problem... how do we incentivize provers to bring more hardware online? After all, adding more hardware increases the gas limit, which increases the supply of blockspace. This causes congestion (and fees) to fall, increasing consumer surplus. But provers don't get paid in consumer surplus, they get paid in fees. So, adding more hardware hurts provers in two ways. It increases their costs, and it reduces the average fee level. This means that provers are incentivized to provide as little capacity as possible.

The way we handle this problem is by introducing competition. In Sovereign, we only reward the first prover to publish a valid proof of a block. Since proving is almost perfectly parallel, and provers are racing to prove the block first, a prover which adds slightly more capacity than its rivals experiences a disproportionate increase in rewards. This should encourage provers to bring as much capacity as possible.

Since we want to reward provers with funds on the rollup, we need consensus. (Otherwise, it would be trivial to cause a chain split by creating a fork which sent some rewards to a different prover.) So, we require provers to post their proofs on chain. The first prover to post a valid proof of a particular block gets rewarded with the majority of the base_fees collected from that block. This is a deviation from EIP-1559, where all base fees are burned. Intuitively, our construction is still safe because provers "burn" money in electricity and hardware costs in order to create proofs. However, we also burn a small proportion of base fees as insurance in case proving costs ever fall to negligible levels.

Once a prover has posted his proof on the DA layer, two things happen. First, full nodes read the proof and, if it's valid, reward the prover. If it's invalid, the prover has his deposit slashed. (Just like a misbehaving sequencer. Also like sequencers, data posted by unbonded entities is ignored.) Second, light clients of the rollup download and verify the proof, learning the state of the rollup. As an implementation detail, we require proofs which get posted on chain to be domain separated, so that light clients can download just the proofs from a rollup without also needing to fetch all of the transaction data.

Summary: The proving workflow

So, putting this all together, the proving workflow looks like this:

  1. A DA layer block is produced at height N. This block contains some rollup transactions.

  2. Full nodes immediately process the transactions and compute a new state.

  3. Provers begin generating a proof of block N.

  4. (About 15 minutes later) a prover creates a valid proof of block N. In the meantime, DA layer blocks N+1 through N+X have been produced.

    a. At this point, full nodes are aware of rollup state N+X, while light clients are still unaware of N

  5. The prover creates a new AggregateProof, which...

    a. Proves the validity of the proof of block N

    b. Proves the validity of the previous AggregateProof (which covered the rollup's history from genesis to block N-1)

    c. Optionally proves the validity of proofs of blocks N+1, N+2, ..., N+X, if such proofs are available. (Note that the AggregateProof must cover a contiguous range of blocks starting from genesis, but it may cover any number of blocks subject to that constraint.) For concreteness, suppose that in this case the prover includes blocks N+1 through N+5.

  6. The prover posts the new AggregateProof onto the DA layer at some height - call it N+30. At this point, full nodes are aware of state N+30 (which includes a reward for the prover), and light clients are aware of state N+5. At some point in the future, a proof of N+30 will be generated, at which point light clients will become aware of the prover's reward.

Optimistic Proofs

For some rollups, generating a full zero-knowledge proof is too expensive. For these applications, the Sovereign SDK offers Optimistic Mode, which allows developers to trade some light-client latency for lower costs. With a zk-rollup, light clients have a view of the state which lags behind by about 15 minutes (the time it takes to generate a zero-knowledge proof). However, at the end of those 15 minutes, light clients know the state with cryptographic certainty.

In an optimistic rollup, light clients have a different experience. They get some indication of the new rollup state very quickly (usually in the very next block), but they need to wait much longer (usually about a day) to be sure that their new view is correct. And, even in this case, clients only have "cryptoeconomic" certainty about the new state.

Proving Setup

In an optimistic rollup, the "proofs" checked by light clients are not (usually) proofs at all. Instead, they are simple attestations. Attesters stake tokens on claims like "the state of the rollup at height N is X", and anyone who successfully challenges a claim gets to keep half of the staked tokens. (The other half are burned to prevent an attester from lying about the state and then challenging himself from another account and keeping his tokens). In exchange for their role in the process, attesters are rewarded with some portion of the rollup's gas fees. This compensates attesters for the opportunity cost of locking their capital.

This mechanism explains why light clients can know the state quickly with some confidence right away, but they take time to reach full certainty. Once they've seen an attestation to a state, clients know that either the state is correct, or the attester is going to lose some amount of capital. As time goes by and no one challenges the assertion, their confidence grows until it reaches (near) certainty. (The point at which clients are certain about the outcome is usually called the "finality period" or "finality delay".)

The previous generation of optimistic rollups (including Optimism and Arbitrum) relies on running an on-chain bisection game over an execution trace to resolve disputes about the rollup state. This requires $log_2(n)$ rounds of interaction, where n is the length of the trace (i.e. a few hundred million). To handle the possibility of congestion or censorship, rollups need to set the timeout period of messages conservatively - which means that a dispute could take up to a week to resolve.

In the Sovereign SDK, we resolve disputes by generating a zero-knowledge proof of the outcome of the disputed block. Since this only requires one round of interaction, we don't need the same challenge delay. However, we do need to account for the fact that proving is a heavy process. Generating a proof might take a few hours, and proving services might be experiencing congestion. To minimize the risk, we plan to set the finality period conservatively at first (about one day) and reduce it over time as we gain confidence.

Otherwise, the overall proving setup is quite similar to that of a zk-rollup. Just as in zk-rollups, proofs (and attestations) are posted onto the DA layer so that we have consensus about who to reward and who to slash. And, just like a zk-rollup, optimistic proofs/attestations are posted into a separate "namespace" on the DA layer (if possible) so that light clients can avoid downloading transaction data. The only other significant distinction between optimistic and zk rollups in Sovereign is that optimistic rollups use block-level proofs to resolve disputes instead of generating aggregate proofs which go all the way to genesis.

Conclusion

In the Sovereign SDK, we try to provide security, flexibility, and performance in that order.

As a contributor, it's your job to maintain that hierarchy. Security must always come first. And in blockchain, security is mostly about incentives: you get what you incentivize. If your rollup under-prices some valuable resource, you'll get spam. If you underpay for some service, that service won't be provided reliably.

This is why incentive management is so deeply baked into the SDK. Every step - from sequencing to proving to execution to finality - needs to be carefully orchestrated to keep the incentives of the participants in balance.

Once the setup is secure, our next priority is enabling the broadest set of use cases. We try to provide maximum flexibility, and abstract as much functionality as possible into reusable components. You can read more about how we achieve flexibility at the level of Rust code in the abstractions chapter.

Finally, we optimize performance. This means eliminating redundant computation, carefully managing state access patterns, and considering the strengths and weaknesses of zero-knowledge proof systems.

Happy hacking!

Main Abstractions

This document provides an overview of the major abstractions offered by the SDK.

  • Rollup Interface (STF + DA service + DA verifier)
  • sov-modules (Runtime, Module, stf-blueprint w/ account abstraction, state abstractions)
  • sov-sequencer
  • sov-db
  • Rockbound

One of the most important principles in the Sovereign SDK is modularity. We believe strongly in separating rollups into their component parts and communicating through abstract interfaces. This allows us to iterate more quickly (since components are unaware of the implementation details of other components), and it also allows us to reuse components in contexts which are often quite different from the ones in which they were originally designed.

In this chapter, we'll give a brief overview of the core abstractions of the Sovereign SDK.

Native vs. ZK Execution

Perhaps the most fundamental abstraction in Sovereign is the separation between "native" code execution (which computes a new rollup state) and zero-knowledge verification of that state. Native execution is the experience you're used to. In native execution, you have full access to networking, disk, etc. In native mode, you typically trust data that you read from your own database, but not data that comes over the network.

Zero-knowledge execution looks similar. You write normal-looking Rust code to do CPU and memory operations - but under the hood, the environment is alien. In zero-knowledge execution, disk and network operations are impossible. Instead, all input is received from the (untrusted) machine generating the proof via a special syscall. So if you make a call that looks like a network access, you might not get a response from google.com. Instead, the prover will pick some arbitrary bytes to give back to you. The bytes might correspond to an actual response (i.e. if the prover is honest and made the network request for you) - but they might also be specially crafted to deceive you. So, in zero-knowledge mode, great care must be taken to avoid relying on unverified data from the prover.

In the Sovereign SDK, we try to share code between the "native" full node implementation and the zero-knowledge environment to the greatest extent possible. This minimizes surface area for bugs. However, a full node necessarily needs a lot of logic which is unnecessary (and undesirable) to execute in zero-knowledge. In the SDK, such code is gated behind a cargo feature called "native". This code includes RPC implementations, as well as logic to pre-process some data into formats which are easier for the zero-knowledge code to verify.

The Rollup Interface

If you squint hard enough, a zk-rollup is made of three separate components. There's an underlying blockchain ("Data Availability layer"), a set of transaction execution rules ("a State Transition Function") and a zero-knowledge proof system (a "ZKVM" for zero-knowledge virtual machine). In the abstract, it seems like it should be possible to take the same transaction processing logic (i.e. the EVM) and deploy it on top of many different DA layers. Similarly, you should be able to take the same execution logic and compile it down to several different proof systems - in the same way that you can take the same code and run it on Risc0 or SP1.

Unfortunately, separating these components can be tricky in practice. For example, the OP Stack relies on an Ethereum smart contract to enforce its censorship resistance guarantees - so, you can't easily take an OP stack rollup and deploy it on a non-EVM chain.

In the Sovereign SDK, flexibility is a primary design goal. So we take care to codify this separation of concerns into the framework from the very beginning. With Sovereign, it's possible to run any State Transition Function alongside any DaService on top of any (Rust-compatible) proof system and get a functional rollup. The rollup-interface crate is what makes this possible. Every other crate in the SDK depends on it, because it defines the core abstractions that are shared between all SDK rollups.

A diagram showing how the rollup interface supports the entire Sovereign SDK

Inside of the rollup interface, the native vs zero-knowledge distinction appears in numerous places. For example, the DA layer abstraction has two components - a DaService, which runs as part of native full node execution and provides methods for fetching data from the underlying blockchain; and DaVerifier, which runs in zero-knowledge and verifies that the data being executed matches the provided DA block header.

How it Works

Essentially, the Sovereign SDK is just a generic function that does this:

// Illustrative pseudocode - not the SDK's literal API
fn run_rollup<Da: DaService, Zk: Zkvm, Stf: StateTransitionFunction>(self, da: Da, zkvm: Zk, business_logic: Stf) {
	loop {
		// Run some `native` code to get the data for execution
		let (block_data, block_header) = da.get_next_block();
		let (input_state, input_state_root) = self.db.get_state();
		// Run some zero-knowledge code to execute the block
		let proof = zkvm.prove(|| {
			// Check that the inputs match the provided commitments
			if !da.verify(block_data, block_header) || !input_state.verify(input_state_root) {
				panic!()
			};
			// Make the data commitments part of the public proof
			output!(block_header.hash(), input_state_root)
			let output_state_root = business_logic.run(block_data, input_state);
			// Add the output root to the public proof
			output!(output_state_root)
		});
		// Publish the proof onto the DA layer
		da.publish(proof);
	}
}

As you can see, most of the heavy lifting is done by the DA layer, the Zkvm and the rollup's business logic. The full node implementation is basically just glue holding these components together.

DA

As discussed above, the role of the DA layer is to order and publish data. To integrate with the Sovereign SDK, a DA layer needs to provide implementations of two core traits: DaService and DaVerifier.

DA Service

The DaService trait is usually just a thin wrapper around a DA layer's standard RPC client. This trait provides standardized methods for fetching data, generating merkle proofs, and publishing data. Because it interacts with the network, correct execution of this trait is not provable in zero-knowledge.

Instead, the work of verifying the data provided by the DaService is offloaded to the DaVerifier trait. Since the DaService runs only in native code, its implementation is less concerned about efficiency than zero-knowledge code. It's also easier to patch, since updating the DaService does not require any light clients or bridges to update.

The DaService is the only component of the SDK responsible for publishing and fetching data. The SDK's node does not currently have a peer-to-peer network of its own. This dramatically simplifies the full node and reduces bandwidth requirements.

DA Verifier

The DaVerifier is the zero-knowledge-provable counterpart of the DaService. It is responsible for checking that the (untrusted) private inputs to a proof match the public commitment as efficiently as possible. It's common for the DaVerifier to offload some work to the DaService (e.g. computing extra metadata) in order to reduce the amount of computation required by the DaVerifier.

At the level of Rust code, we encode the relationship between the DaVerifier and the DaService using a helper trait called DaSpec - which specifies the types on which both interfaces operate.
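
A heavily simplified sketch of that relationship (the real traits have more methods, associated types, and error handling):

trait DaSpec {
    type BlockHeader;
    type Blob;
}

/// Native-only: fetches data over the network; not provable in zero-knowledge
trait DaService {
    type Spec: DaSpec;

    fn get_block_at(
        &self,
        height: u64,
    ) -> (
        Vec<<Self::Spec as DaSpec>::Blob>,
        <Self::Spec as DaSpec>::BlockHeader,
    );
}

/// Runs in zero-knowledge: checks that the (untrusted) blobs match the committed header
trait DaVerifier {
    type Spec: DaSpec;

    fn verify_relevant_blobs(
        &self,
        blobs: &[<Self::Spec as DaSpec>::Blob],
        header: &<Self::Spec as DaSpec>::BlockHeader,
    ) -> bool;
}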

Zero Knowledge Virtual Machine ("Zkvm")

The Zkvm traits make a zk-SNARK system (like Risc0 or SP1) compatible with the Sovereign SDK. Like the DA layer, we separate Zkvm traits into a native and zk version, plus a shared helper.

The ZkvmHost trait describes how a native computer executes an elf file (generated from Rust code) and generates a zero-knowledge proof. It also describes how the native machine passes private inputs (the "witness") into the execution.

The ZkvmGuest trait describes how a program running in zero-knowledge mode accepts inputs from the host machine.

Finally, the ZkVerifier trait describes how a proof generated by the host is verified. This trait is implemented by both the Host and the Guest, which is how we represent that proofs must be verifiable natively and recursively (i.e. inside another SNARK.)

State Transition

A StateTransitionFunction ("STF") is a trait which describes:

  1. How to initialize a rollup's state at genesis

  2. How to apply the data from the DA layer to generate a new state

In other words, the implementation of StateTransitionFunction is what defines the rollup's "business logic".

In the Sovereign SDK, we define a generic full node which can run any STF. As long as your logic implements the interface, we should be able to run it.
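
In simplified form (the SDK's actual trait has more associated types and returns richer results), the interface looks something like this:

trait StateTransitionFunction {
    type GenesisConfig;
    type State;
    type BlockData;

    /// How to initialize the rollup's state at genesis
    fn init_chain(&self, config: Self::GenesisConfig) -> Self::State;

    /// How to apply the data from a DA layer block to produce a new state
    fn apply_block(&self, pre_state: Self::State, block: Self::BlockData) -> Self::State;
}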

However, implementing the business logic of a rollup is extremely complicated. While it's relatively easy to roll your own implementation of the Da or Zkvm traits, building a secure STF from scratch is a massive undertaking. It's so complex, in fact, that we assume no one will ever do it - and the vast majority of the Sovereign SDK's code is devoted to providing a generic implementation of an STF that developers can customize. (This STF is what we call the Sovereign module system, or sov-modules).

So if no one is ever going to implement the StateTransitionFunction interface, why bother maintaining it at all? One reason is for flexibility. Just because we don't expect anyone to roll their own STF doesn't mean that they won't. But a bigger motivation is to keep concerns separate. By hiding the implementation details of the rollup behind the STF interface, we build a firm abstraction barrier between it and the full node. This means that we're free to make breaking changes on either side of the wall (either in the node, or in the STF) without worrying about breaking the other component.

Sov Modules

Outside of the rollup interface, the most important abstraction is sov-modules. sov-modules is a pre-built STF with pluggable... modules. It does the heavy lifting of implementing a secure STF so that you can focus on the core logic of your application.

The Runtime

At the heart of any sov-modules rollup is the Runtime:

// An example runtime similar to the one used in our "standard" demo rollup
pub struct Runtime<S: Spec> {
    /// The Bank module implements fungible tokens, which are needed to charge `gas`
    pub bank: sov_bank::Bank<S>,
    /// The Sequencer Registry module is where we track which addresses can send batches to the rollup
    pub sequencer_registry: sov_sequencer_registry::SequencerRegistry<S>,
    /// The Prover Incentives module is where we reward provers who do useful work
    pub prover_incentives: sov_prover_incentives::ProverIncentives<S>,
    /// The Accounts module implements identities on the rollup. All of the other modules rely on it
    /// to link cryptographic keys to logical accounts
    pub accounts: sov_accounts::Accounts<S>,
    /// The NFT module provides an implementation of a non-fungible token standard. It's totally optional.
    pub nft: sov_nft_module::NonFungibleToken<S>,
    /// The EVM module lets the rollup run Ethereum smart contracts. It's totally optional.
    #[cfg_attr(feature = "native", cli_skip)]
    pub evm: sov_evm::Evm<S>,
}

At the highest level, a runtime is "just" a collection of all the modules which are included in your rollup. Its job is to take Transactions and dispatch them to the appropriate module for execution.

Pretty much all rollups built with sov-modules include the bank, the sequencer registry, and the accounts module in their Runtime. They also usually include either sov_prover_incentives (if they're a zk-rollup) or sov_attester_incentives (if they're an optimistic rollup).

You may also have noticed that the Runtime is generic over a Spec. This Spec describes the core types (addresses, hashers, cryptography) used by the rollup and the DA layer. Making your runtime generic over a Spec means that you can easily change DA layers, or swap any of the core primitives of your rollup. For example, a rollup can trivially switch from Ed25519 to secp256k1 for its signature scheme by changing the implementation of its Spec trait.
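As an illustration, here is roughly what that swap looks like. The concrete type names (MyRollupSpec, MyAddress, MyHasher, Ed25519CryptoSpec, Secp256k1CryptoSpec) are placeholders, not the SDK's real identifiers:

// Illustrative only: a rollup's Spec pins down its core primitives in one place.
pub struct MyRollupSpec;

impl Spec for MyRollupSpec {
    type Address = MyAddress;
    type Hasher = MyHasher;
    // Switching the rollup's signature scheme is a one-line change here:
    type CryptoSpec = Secp256k1CryptoSpec; // previously: Ed25519CryptoSpec
    // -- remaining associated types omitted --
}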

Modules

"Modules" are the things that process transactions. For example, the Bank module lets users transfer tokens to each other. And the EVM module implements a full Ethereum Virtual Machine that can process any valid Ethereum transaction.

A Module is just a Rust struct that implements two traits called Module and ModuleInfo.

The Module trait

The Module trait is like a simplified version of the StateTransitionFunction. It describes how to initialize the module at the rollup's genesis, and how the module processes CallMessages received from users (i.e. how it processes transactions).

pub trait Module {
    // -- Some associated type definitions (e.g. `Spec` and `Config`) are omitted here --
    /// Module defined argument to the call method.
    type CallMessage: Debug;

    /// Genesis is called when a rollup is deployed and can be used to set initial state values in the module.
    fn genesis(
        &self,
        _config: &Self::Config,
        _working_set: &mut WorkingSet<Self::Spec>,
    ) -> Result<(), ModuleError>;

    /// Processes a transaction, updating the rollup state.
    fn call(
        &self,
        _message: Self::CallMessage,
        _context: &Context<Self::Spec>,
        _state: &mut impl TxState<Self::Spec>,
    ) -> Result<CallResponse, ModuleError>;
}

You'll notice that the call function takes three arguments: an associated CallMessage type, a Context, and a state accessor implementing TxState.

  • The CallMessage type is the deserialized content of the user's transaction - and the module can pick any type to be its CallMessage. In most cases, modules use an enum with one variant for each action a user might want to take. For example, the Bank::CallMessage type has variants for minting, transferring, and burning tokens (see the sketch after this list).

  • The Context type is relatively straightforward. It simply contains the address of the sequencer who published the transaction, the identity of the transaction's signer, and the current block height.

  • The TxState is the most interesting of the three, but it needs a little bit of explanation. In the Sovereign SDK, the Rust struct which implements a Module doesn't actually contain any state. Rather than holding actual values, the module simply defines the structure of some items in state. All of the actual state of the rollup is stored in the State object, which is an in-memory layer on top of the rollup's database (in native mode) or merkle tree (in zk mode). The State abstraction handles commit/revert semantics for you, as well as taking responsibility for caching, deduplication, and automatic witness generation/checking. It also provides utilities for charging gas and emitting events.
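For instance, a bank-style CallMessage might look roughly like this; the variants and fields below are illustrative rather than the exact sov_bank definitions:

/// Illustrative sketch of a bank-style CallMessage; not the exact sov_bank type.
pub enum CallMessage<S: Spec> {
    /// Mint new tokens of an existing token type to a recipient.
    Mint {
        token_id: TokenId,
        amount: u64,
        to: S::Address,
    },
    /// Transfer tokens from the sender to another account.
    Transfer {
        token_id: TokenId,
        amount: u64,
        to: S::Address,
    },
    /// Burn tokens from the sender's balance.
    Burn {
        token_id: TokenId,
        amount: u64,
    },
}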

The Accounts module provides a good example of a standard Module trait implementation.

pub enum CallMessage<S: Spec> {
    /// Updates a public key for the corresponding Account.
    /// The sender must be in possession of the new key.
    UpdatePublicKey(
        /// The new public key
        <S::CryptoSpec as CryptoSpec>::PublicKey,
        /// A valid signature from the new public key
        <S::CryptoSpec as CryptoSpec>::Signature,
    ),
}

impl<S: Spec> sov_modules_api::Module for Accounts<S> {
    // -- Some items (including verification of the signature from the new key) are omitted here --
    fn call(
        &self,
        msg: Self::CallMessage,
        context: &Context<S>,
        working_set: &mut WorkingSet<S>,
    ) -> Result<sov_modules_api::CallResponse, Error> {
        match msg {
            call::CallMessage::UpdatePublicKey(new_pub_key, _sig) => {
                // Find the account of the sender
                let pub_key = self.public_keys.get(context.sender(), working_set)?;
                let account = self.accounts.get(&pub_key, working_set);
                // Update the public key
                self.accounts.set(&new_pub_key, &account, working_set);
                self.public_keys
                    .set(context.sender(), &new_pub_key, working_set);
                Ok(Default::default())
            }
        }
    }
}

The ModuleInfo trait

The ModuleInfo trait describes how the module interacts with the broader module system. Each module has a unique ID and stores its state under a unique prefix of the global key-value store provided by sov-modules.

pub trait ModuleInfo {
    /// Returns id of the module.
    fn id(&self) -> &ModuleId;

    /// Returns the prefix where module state is stored.
    fn prefix(&self) -> ModulePrefix;

    /// Returns addresses of all the other modules this module is dependent on
    fn dependencies(&self) -> Vec<&ModuleId>;
}

Unlike the Module trait, it's incredibly rare for developers to implement ModuleInfo by hand. Instead, it's strongly recommended to derive ModuleInfo using our handy macro. A typical usage looks like this:

#[derive(ModuleInfo, Clone)]
pub struct Bank<S: sov_modules_api::Spec> {
    /// The id of the sov-bank module.
    #[id]
    pub(crate) id: ModuleId,

    /// The gas configuration of the sov-bank module.
    #[gas]
    pub(crate) gas: BankGasConfig<S::Gas>,

    /// A mapping of [`TokenId`]s to tokens in the sov-bank.
    #[state]
    pub(crate) tokens: sov_modules_api::StateMap<TokenId, Token<S>>,
}

This code automatically generates a unique ID for the bank module and stores it in the field of the module called id. It also initializes the StateMap "tokens" so that any keys stored in the map will be prefixed with the module's prefix. This prevents collisions in case a different module also declares a StateMap where the keys are TokenIds.

Module State

The Sovereign SDK provides three core abstractions for managing module state. A StateMap<K, V> maps arbitrary keys of type K to arbitrary values of type V. A StateValue<V> stores a single value of type V. And a StateVec<V> stores an arbitrary-length vector of values of type V. All three types require their arguments to be serializable, since the values are stored in a merkle tree under the hood.

All three abstractions support changing the underlying encoding scheme but default to Borsh if no alternative is specified. To override the default, simply add an extra type parameter which implements the StateCodec trait (e.g. you might write StateValue<Da::BlockHeader, BcsCodec> to use the BCS serialization scheme for block headers, since your library of DA layer types might only support serde-compatible serializers).
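For example, a module that stores DA block headers might declare a field like the one below; the module and field names here are illustrative:

// Illustrative sketch: overriding the default Borsh codec for a single state value.
#[derive(ModuleInfo, Clone)]
pub struct HeaderStore<S: sov_modules_api::Spec, Da: DaSpec> {
    #[id]
    pub(crate) id: ModuleId,

    /// Stored with BCS instead of Borsh, since the DA header type may only support serde.
    #[state]
    pub(crate) latest_header: sov_modules_api::StateValue<Da::BlockHeader, BcsCodec>,
}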

All state values are accessed through a state accessor like TxState. For example, you always write my_state_value.get(&mut state) to fetch a value. It's also important to remember that modifying a value that you read from state doesn't have any effect until you write it back with my_state_value.set(&new_value, &mut state).
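As a minimal sketch, assuming a hypothetical module with a StateValue<u64> field named count, the read-modify-write pattern looks like this (error handling simplified):

impl<S: Spec> Counter<S> {
    /// Illustrative read-modify-write: the update is only visible after `set`.
    fn increment(&self, state: &mut impl TxState<S>) -> Result<(), ModuleError> {
        // Read the current value through the state accessor (None if unset).
        let current = self.count.get(state)?.unwrap_or(0);
        // Writing the value back is what actually records the change.
        self.count.set(&(current + 1), state);
        Ok(())
    }
}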

Merkle Tree Layout

sov-modules currently uses a generic Jellyfish Merkle Tree for its authenticated key-value store. (Generic because it can be configured to use any 32-byte hash function). In the near future, this JMT will be replaced with the Nearly Optimal Merkle Tree that is currently under development.

In the current implementation, the SDK implements storage by generating a unique (human-readable) key for each StateValue, using the hash of that key as a path in the merkle tree. For StateMaps, the serialization of the key is appended to that path. And for StateVecs, the index of the value is appended to the path.

For example, consider the following module:

// Suppose we're in the file my_crate/lib.rs
#[derive(ModuleInfo, Clone)]
pub struct Example<S: sov_modules_api::Spec> {
    #[id]
    pub(crate) id: ModuleId,
    #[state]
    pub(crate) some_value: sov_modules_api::StateValue<u8>,
    #[state]
    pub(crate) some_vec: sov_modules_api::StateVec<u64>,
    #[state]
    pub(crate) some_map: sov_modules_api::StateMap<String, String>,
}

The value of some_value would be stored at the path hash(b"my_crate/Example/some_value"). The value of the key "hello" in some_map would be stored at hash(b"my_crate/Example/some_map/⍰hello") (where ⍰hello represents the borsh encoding of the string "hello"), and so on.

However, this layout may change in the future to provide better locality.

Exotic State Variants

In addition to the standard state store, we support two other kinds of state:

KernelStateValues (or maps/vecs) act identically to regular StateValues, but they're stored in a separate merkle tree which is more tightly access controlled. This mechanism allows the rollup to store data that is inaccessible during transaction execution, which is necessary to enable soft confirmations without sacrificing censorship resistance. For more details, see the section on soft confirmations in the transaction lifecycle documentation. The global "state root" returned by sov-modules from its StateTransitionFunction implementation is the hash of the kernel state root combined with the regular state root. We do our best to hide this detail from users of the SDK, though. Merkle proofs are automatically generated against the global root, so users don't need to worry about which state trie their values are in.

AccessoryStateValue (or map/vec) types are similar to Kernel types except that their values are not readable from inside the state transition function at all. Under the hood, these values are stored in the rollup's database but not in either merkle tree. This is useful for creating data that will be served via RPC but never accessed again during execution - for example, the transaction receipts from an Ethereum block.
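For example, an EVM-style module could keep transaction receipts in accessory state so they are served by the node's APIs without ever being merklized. The field, attribute placement, and type names below are illustrative:

// Illustrative sketch: accessory state is written during execution and queried
// over the node's APIs, but it is never merklized and can't be read back in `call`.
#[derive(ModuleInfo, Clone)]
pub struct Receipts<S: sov_modules_api::Spec> {
    #[id]
    pub(crate) id: ModuleId,

    /// Maps a transaction hash to its receipt; written during execution, served via the API.
    #[state]
    pub(crate) receipts: sov_modules_api::AccessoryStateMap<TxHash, Receipt>,
}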

The STF Blueprint

The last key component of a sov-modules rollup is the stf-blueprint. This "blueprint" provides a generic implementation of a StateTransitionFunction in terms of a Runtime (described above) and a Kernel (which provides security-critical functionality like censorship resistance in a way that's isolated from the transaction execution logic).

The STF blueprint implements the following high-level workflow (sketched in pseudocode after the list):

  1. Take all of the new data Blobs read from the DA layer and send them to the Kernel. The Kernel will return a list of deserialized Batches of transactions as well as the current gas price. (A "Batch" is a "Blob" sent by a registered sequencer that has been successfully deserialized into a list of Transactions.)
  • Note that the list of Batches returned by the Kernel does not necessarily correspond exactly to the incoming Blobs. The Kernel might decide to ignore some Blobs, or to store some in its internal state for "deferred" execution. It might also add some Batches saved from a previous slot.
  2. Run the begin_slot hook, allowing modules to execute any initialization logic.

  3. For each batch, initialize the sequencer reward to zero and run the begin_batch hook. Apply the transactions, rewarding or penalizing the sequencer as appropriate. Finally, run the end_batch hook.

  4. Run the end_slot hook to allow modules to execute any final logic.

  5. Compute the state change set and state root based on the transactions that were executed.

  6. Execute the finalize hook, which allows modules to compute any summary information from the change set and make it available via RPC.
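In pseudocode, the slot-processing loop looks roughly like this. Every name here (apply_slot, select_batches, the hook methods, and so on) is illustrative; the real blueprint is considerably more involved:

// Illustrative pseudocode of the stf-blueprint's slot workflow; not real SDK code.
fn apply_slot(&mut self, blobs: Vec<Blob>) -> SlotResult {
    // 1. The kernel decides which blobs become executable batches this slot
    //    and reports the current gas price.
    let (batches, gas_price) = self.kernel.select_batches(blobs);

    // 2. Per-slot initialization logic.
    self.runtime.begin_slot_hook();

    for batch in batches {
        // 3. Per-batch execution, tracking the sequencer's reward or penalty.
        let mut sequencer_reward = 0;
        self.runtime.begin_batch_hook(&batch);
        for tx in batch.txs {
            sequencer_reward += self.apply_tx(tx, &gas_price);
        }
        self.runtime.end_batch_hook(sequencer_reward);
    }

    // 4. Per-slot finalization logic.
    self.runtime.end_slot_hook();

    // 5. Compute the change set and the new state root.
    let (change_set, state_root) = self.compute_state_update();

    // 6. Let modules derive summary data that is only served via the node's APIs.
    self.runtime.finalize_hook(&change_set);

    SlotResult { state_root, change_set }
}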

For more details on the process of applying individual transactions, see the transaction lifecycle document.

Sequencer Registration via Forced Inclusion

Forced inclusion is a strategic mechanism in rollups designed to circumvent sequencers that censor user transactions. It allows users to directly submit transaction batches to the Data Availability Layer instead of going through a sequencer.

The Sovereign SDK supports this feature under specific conditions and guidelines. Crucially, only "Register Sequencer" transactions are accepted for forced inclusion; all other types will be ignored. For more details, see the Rules section.

Usage

The Sovereign SDK limits the number of batches from unregistered sequencers that are processed per rollup slot. This prevents the mechanism from being used as a denial-of-service (DoS) attack vector.

Process for Forced Registration

  1. Create a batch containing a valid "Register Sequencer" transaction.
  2. Submit the batch to the Data Availability layer.
  3. Rollup nodes collect and execute the transaction.
  4. If the transaction complies with all rules, the user is registered as a sequencer and can submit regular transaction batches.

Rules

To ensure forced inclusion requests are processed correctly, the following rules apply:

  • Transaction Limit: Only the first transaction in each batch is taken into account. Any additional transactions will be discarded.
  • Transaction Type: The transaction must be a "Register Sequencer" transaction.
  • Transaction Construction: The transaction must be properly formatted and comply with standard transaction rules.
  • Financial Requirements: Users must have enough funds to cover:
    • Pre-execution checks (including signature validation, deserialization and transaction type checks).
    • Transaction execution costs.
    • A bond required for sequencer registration.

Gas Specification

This document contains a detailed specification of the way gas is handled within Sovereign's SDK. We use <., .> to denote the scalar product of two multidimensional quantities.

Definition

Gas is a ubiquitous concept in the blockchain space. It is a measure of the computational effort required to perform an operation as part of a transaction's execution. Metering gas prevents the network from being spammed by regulating the computational resources each participant can consume.

High level overview

We have drawn a lot of inspiration from the Ethereum gas model in our gas mechanism design. Given that Ethereum's gas is well understood and widely used in the crypto industry, we believe that this will help users onboard more easily while providing strong security guarantees out of the box. We have deliberately chosen to tweak some concepts that were ill-suited to rollups built using the Sovereign SDK. In particular, in decreasing order of importance:

  • We are using multidimensional gas units and prices.
  • We plan to use a dynamic gas target. Otherwise, rollups built with the Sovereign SDK follow the EIP-1559 specification by default.
  • Rollup transactions specify a max_fee, a max_priority_fee_bips, and an optional gas_limit. The semantics of these quantities roughly match their definitions in the EIP-1559 specification.
  • Transaction rewards are decomposed into base_fee and priority_fee. The base_fee is only partially burnt by default, the remaining amount is used to reward provers/attesters. The priority_fee is used to reward the block sequencers.
  • We are charging gas for every storage access within the module system by default.
  • Customers of the SDK have access to wrappers that allow them to charge gas for hash computation and signature checks.

A design for multidimensional gas

Sovereign SDK's rollups use multidimensional gas units and prices. For example, this allows developers to take into account the differences between native and zero-knowledge computational costs for the same operation. Indeed:

  • Hashing is orders of magnitude more expensive when performed inside a zero-knowledge circuit. The cost of proving the correct computation of two different hash functions may also vary much more than the cost of computing the hashes themselves (e.g. Poseidon or MiMC vs SHA-2).
  • Accessing a storage cell for the first time is much more expensive in zk mode than in native mode. But hot storage accesses are practically free in zero-knowledge.

In the Sovereign SDK, we currently meter consumption in two dimensions - compute and memory.

We have chosen to follow the multi-dimensional EIP-1559 design for the gas pricing adjustment formulas. In essence:

  • We are performing the gas price updates for each dimension separately. In other words, each dimension follows a separate uni-dimensional EIP-1559 gas price adjustment formula.
  • The gas price adjustment formula uses a gas_target reference, which is a uni-dimensional gas unit that is compared to the gas consumed, gas_used. The gas_price is then adjusted to regulate the gas throughput to get as close as possible to the gas_target. We have the following invariant: 0 <= gas_used_slot <= 2 * gas_target.
  • Unlike Ethereum, we are planning to use a dynamic gas_target. The value of the gas_target will vary slowly to follow the evolution of the rollup metrics described above. That way, Sovereign rollups can account for major technological improvements in computation (such as zk-proof generation throughput) or storage cost.
  • Every transaction has to specify a scalar max_fee which is the maximum amount of gas tokens that can be used to execute a given transaction. Similarly, users have to specify a max_priority_fee_per_gas expressed in basis points which can be used to reward the transaction sequencer.
  • The final sequencer reward is: seq_reward = min(max_fee - <base_fee, gas_price>, max_priority_fee_per_gas * <base_fee, gas_price>).
  • Users can provide an optional gas_limit field which is a maximum amount of gas to be used for the transaction. This quantity is converted to a uni-dimensional remaining_funds quantity by taking the scalar product with the current gas_price.
  • If users provide the gas_limit, the rollup checks that <gas_limit, current_gas_price> <= max_fee (i.e. the scalar product of the gas_limit with the current gas_price must not exceed the max_fee). If the check fails, the associated transaction is not executed and the rollup raises a ReserveGasErrorReason::CurrentGasPriceTooHigh error. See the sketch after this list.
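The following sketch shows how those quantities combine, assuming two gas dimensions and writing the scalar product out explicitly. The function and constant names are illustrative, not SDK identifiers:

// Illustrative sketch of the fee checks and sequencer reward described above.
// Gas quantities are two-dimensional (e.g. compute and memory); fees are scalars.
fn settle_fees(
    max_fee: u64,
    max_priority_fee_bips: u64,
    gas_limit: Option<[u64; 2]>,
    gas_price: [u64; 2],
    gas_used: [u64; 2],
) -> Result<u64, &'static str> {
    // Scalar product <a, b> over the gas dimensions.
    let dot = |a: [u64; 2], b: [u64; 2]| a[0] * b[0] + a[1] * b[1];

    // If a gas_limit was provided, it must be affordable at the current gas price.
    if let Some(limit) = gas_limit {
        if dot(limit, gas_price) > max_fee {
            return Err("CurrentGasPriceTooHigh");
        }
    }

    // The scalar cost of the gas actually consumed.
    let base_fee = dot(gas_used, gas_price);
    // seq_reward = min(max_fee - base_fee, priority fee in bips applied to base_fee).
    let seq_reward = max_fee
        .saturating_sub(base_fee)
        .min(max_priority_fee_bips * base_fee / 10_000);
    Ok(seq_reward)
}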

Charging gas for state accesses.

State accessors such as the WorkingSet or the PreExecWorkingSet charge some gas whenever state is modified. If these accessors run out of gas, they return a StateAccessorError and the execution gets reverted (or the sequencer is penalized). Some state accessors - like the StateCheckpoint, the TxScratchpad or the ApiStateAccessor - don't charge gas for state accesses. In that case, the access methods return a Result<T, Infallible> type which can be unwrapped safely using unwrap_infallible.

For now, we are enforcing simple cached access patterns - we refund some gas if the value being accessed/modified is hot (i.e. it has already been accessed and is cached).

Gas rewards.

The gas consumed during transaction execution is used to reward both provers/attesters and block sequencers. The base_fee, i.e. the total amount of gas consumed by the transaction execution, is only partially burnt (the amount to burn is specified by the PERCENT_BASE_FEE_TO_BURN constant), and the remaining portion is locked in a reward pool to be redeemed by provers/attesters. The priority_fee is also partially burnt, with the remainder used to reward block sequencers.

Additional data structures that can be used to charge gas.

We have a couple of additional data structures that can be used to charge gas (a conceptual sketch follows the list). These are:

  • MeteredHasher: a wrapper structure that can be used to charge gas for hash computation.
  • MeteredSignature: a wrapper structure that can be used to charge gas for signature checks.
  • MeteredBorshDeserialize: a supertrait that can be used to charge gas for structures implementing BorshDeserialize.
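As a rough illustration of what such a wrapper does, here is a conceptual stand-in. It is not the SDK's actual MeteredHasher interface; the meter trait, cost constant, and method names are all invented for the example:

// Conceptual sketch only: charge gas in proportion to the bytes being hashed.
use sha2::{Digest, Sha256};

const GAS_PER_HASHED_BYTE: u64 = 3; // illustrative cost constant

/// Minimal stand-in for a gas meter.
trait GasMeter {
    fn charge(&mut self, amount: u64) -> Result<(), OutOfGas>;
}

#[derive(Debug)]
struct OutOfGas;

/// Wraps a hasher so that every byte hashed is paid for from the meter.
struct GasChargingHasher<'a, M: GasMeter> {
    meter: &'a mut M,
    inner: Sha256,
}

impl<'a, M: GasMeter> GasChargingHasher<'a, M> {
    fn new(meter: &'a mut M) -> Self {
        Self { meter, inner: Sha256::new() }
    }

    fn update(&mut self, data: &[u8]) -> Result<(), OutOfGas> {
        // Charge before doing the work, so a failed charge leaves no side effects.
        self.meter.charge(GAS_PER_HASHED_BYTE * data.len() as u64)?;
        self.inner.update(data);
        Ok(())
    }

    fn finalize(self) -> [u8; 32] {
        self.inner.finalize().into()
    }
}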

Structure of the implementation

The core of the gas implementation is located within the sov-modules-api crate in the following modules/files:

  • module-system/sov-modules-api/src/common/gas.rs: contains the implementation of the Gas and GasMeter traits. These are the core interfaces that are consumed by the API. The Gas trait defines the way users can interact with multidimensional gas units. The GasMeter is the interface implemented by every data structure that contains or consumes gas (such as the WorkingSet which contains a TxGasMeter, or the PreExecWorkingSet that may contain a SequencerStakeMeter).
  • module-system/sov-modules-api/src/common/hash.rs: contains the implementation of the MeteredHasher which is a wrapper structure that can be used to charge gas for hash computation.
  • module-system/sov-modules-api/src/transaction.rs: contains the representation of the transaction type that is used within the SDK. These structures contain the max_fee, max_priority_fee_bips and gas_limit fields that represent the maximum amount of gas tokens to use for the transaction, the maximum priority fee to pay the sequencer (in basis points), and an optional multidimensional gas limit (i.e. the maximum amount of gas to be consumed for this transaction).

Outside of the sov-modules-api, within the module system:

  • module-system/module-implementations/sov-chain-state/src/gas.rs: compute_base_fee_per_gas contains the implementation of the gas price update, which follows our modified version of EIP-1559. The gas price is updated within the ChainState module's lifecycle hooks (ChainState::begin_slot_hook updates the gas price, ChainState::end_slot_hook updates the gas consumed by the transaction).
  • module-system/module-implementations/sov-sequencer-registry/src/capabilities.rs: contains the implementation of the SequencerStakeMeter, which is the data structure used to meter the sequencer's stake before the transaction's execution starts.