Introduction
Welcome to the NexusForge documentation. This guide covers everything you need to build, deploy, and verify autonomous AI agents on the NexusForge protocol.
What is NexusForge?
NexusForge is a Solana-native protocol for deploying verifiable AI agents as Anchor programs, with every execution attested inside an AMD SEV-SNP enclave and proved with SP1 zkVM. Proofs land on Solana in a single slot and are verified on-chain using the alt_bn128 syscalls.
The protocol combines AMD SEV-SNP attestation, deterministic replay, and Groth16 over BN254 (compressed from SP1 STARKs) to give agents sub-second settlement and Jito bundle execution on Solana mainnet-beta.
Key Concepts
Agents — Anchor programs deployed to Solana mainnet-beta with per-agent PDAs. Agents are defined by declarative manifests and run inside AMD SEV-SNP enclaves on operator nodes.
Operators — Nodes co-located with Solana leaders that provide AMD SEV-SNP enclaves for agent execution. Nodes bond 10,000 $FORGE into the registry PDA and earn rewards per epoch.
Proofs — Every agent execution produces an SP1 STARK compressed into a Groth16 proof over BN254, verified on-chain through Solana's alt_bn128 syscalls. State commitments use Light Protocol ZK Compression.
Integrations — Agents CPI into Jupiter v6, Pyth, Switchboard, Kamino, MarginFi, Drift, Orca Whirlpools, Raydium CLMM, Phoenix, Solend, Squads v4, and Realms — all in a single atomic transaction, bundled via Jito.
Quick Start
Get a NexusForge agent running on Solana mainnet-beta in under five minutes.
Prerequisites
- Node.js 18+
- Solana CLI 1.18+ — for signing and deploys
- Anchor 0.30+ — for compiling the agent program
- forge-cli — installed via npm (see below)
1. Install the CLI
npm install -g @nexusforge/forge-cli
solana config set --url https://api.mainnet-beta.solana.com
2. Initialize your agent
forge-cli init my-first-agent
This scaffolds a new agent project with the following configuration:
name: my-first-agent
version: 1.0.0
runtime: deterministic-v2
cluster: mainnet-beta
trigger:
  type: slot
  every: 150 # ~60s (150 slots at 400ms)
execution:
  enclave: sev-snp
  compute_units: 400000
  priority_fee_lamports: 10000
verification:
  proof_system: sp1
  post_to: solana
  recursion: groth16-bn254
3. Deploy to mainnet-beta
anchor build
forge-cli agent deploy ./agent.nexus.yaml --cluster mainnet-beta
4. Verify execution
forge-cli verify <agent-pda>
The CLI will fetch the latest proof from the verifier program and verify it locally. You should see:
✔ Proof verified successfully
Agent: my-first-agent
Slot: 268,294,017
Sig: 4rPjs1...c912
System: sp1 → groth16-bn254
Time: 142ms
Authentication
All API requests authenticate with an Ed25519 signature over the request body. The same keypair formats used by Solana wallets (Phantom, Solflare, Backpack) work directly — the protocol has no separate API key system.
Base URL
https://api.nexusforge.io/v2
Authentication Header
X-Forge-PubKey: <BASE58_PUBKEY>
X-Forge-Signature: <BASE58_ED25519_SIG>
X-Forge-Timestamp: <UNIX_SECONDS>
Signatures are computed over the canonical message timestamp + "." + method + " " + path + "\n" + body. You can sign with any Ed25519 library or interop with Phantom / Solflare / Backpack through @solana/wallet-adapter.
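As a sketch, the canonical message can be assembled and signed with Node's built-in Ed25519 support; the canonicalMessage helper below is illustrative, and base58-encoding the signature for the header (e.g. with a library such as bs58) is omitted:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Assemble the canonical message: timestamp + "." + method + " " + path + "\n" + body
function canonicalMessage(timestamp: number, method: string, path: string, body: string): string {
  return `${timestamp}.${method} ${path}\n${body}`;
}

// Any Ed25519 keypair works; here we generate a throwaway one for illustration.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const msg = Buffer.from(canonicalMessage(1713312000, "GET", "/v2/agents", ""), "utf8");

// Detached Ed25519 signature; base58-encode it before placing it in X-Forge-Signature.
const signature = sign(null, msg, privateKey);
console.log(verify(null, msg, publicKey, signature)); // true
```

The same message bytes can instead be signed through a connected wallet via @solana/wallet-adapter.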
Example Request
curl -X GET https://api.nexusforge.io/v2/agents \
-H "X-Forge-PubKey: FoRGeAgnt1111111111111111111111111111111111" \
-H "X-Forge-Signature: 4rPjs1k8cBVnT9xQ...base58sig" \
-H "X-Forge-Timestamp: 1713312000" \
-H "Content-Type: application/json"
Rate Limits
| Plan | Rate Limit | Burst |
|---|---|---|
| Free | 100 req/min | 20 req/s |
| Pro | 1,000 req/min | 100 req/s |
| Enterprise | Unlimited | Custom |
Rate limit headers are included in every response:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1713312000
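One way to consume these headers client-side is to sleep until X-RateLimit-Reset once the remaining budget hits zero. The backoffMs helper below is an illustrative sketch, not part of the SDK:

```typescript
// Compute how long to wait before the next request, given the headers above.
// backoffMs is an illustrative helper, not a NexusForge API.
function backoffMs(headers: Record<string, string>, nowSeconds: number): number {
  const remaining = Number(headers["X-RateLimit-Remaining"]);
  const resetAt = Number(headers["X-RateLimit-Reset"]); // Unix seconds
  if (remaining > 0) return 0;                          // budget left: send immediately
  return Math.max(0, (resetAt - nowSeconds) * 1000);    // otherwise wait for the window reset
}

console.log(backoffMs({ "X-RateLimit-Remaining": "87", "X-RateLimit-Reset": "1713312000" }, 1713311990)); // 0
console.log(backoffMs({ "X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1713312000" }, 1713311990));  // 10000
```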
Agents API
Create, manage, and monitor agents programmatically.
POST /agents
Create and deploy a new agent.
{
  "name": "price-oracle-agent",
  "runtime": "deterministic-v2",
  "cluster": "mainnet-beta",
  "trigger": {
    "type": "slot",
    "every": 150
  },
  "execution": {
    "enclave": "sev-snp",
    "compute_units": 400000,
    "priority_fee_lamports": 10000
  },
  "verification": {
    "proof_system": "sp1",
    "post_to": "solana",
    "recursion": "groth16-bn254"
  }
}
{
  "id": "agt_7x9k2m4n",
  "name": "price-oracle-agent",
  "pda": "FoRGeAgnt1111111111111111111111111111111111",
  "status": "deploying",
  "created_at": "2026-04-16T12:00:00Z",
  "cluster": "mainnet-beta",
  "node": {
    "id": "node_3f8a",
    "region": "us-east-1",
    "enclave": "sev-snp"
  },
  "verification": {
    "proof_system": "sp1",
    "post_to": "solana",
    "verifier_program": "vrFyNxsFrg3E111111111111111111111111111111"
  }
}
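As a sketch, the request above can be assembled in TypeScript; buildCreateAgentRequest is an illustrative helper, and the auth headers follow the Authentication section:

```typescript
// Build (but don't send) the POST /agents request. Signing is elided;
// see the Authentication section for the canonical message format.
function buildCreateAgentRequest(manifest: object, pubkey: string, signature: string, timestamp: number) {
  return {
    url: "https://api.nexusforge.io/v2/agents",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Forge-PubKey": pubkey,
        "X-Forge-Signature": signature,
        "X-Forge-Timestamp": String(timestamp),
      },
      body: JSON.stringify(manifest),
    },
  };
}

const req = buildCreateAgentRequest({ name: "price-oracle-agent" }, "FoRGe...", "sig...", 1713312000);
console.log(req.init.method); // POST
// Send with: await fetch(req.url, req.init)
```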
GET /agents/:id
Retrieve the current status and configuration of an agent.
{
  "id": "agt_7x9k2m4n",
  "pda": "FoRGeAgnt1111111111111111111111111111111111",
  "name": "price-oracle-agent",
  "status": "running",
  "uptime": "4d 12h 33m",
  "executions": 11842,
  "last_proof": {
    "slot": 268294017,
    "signature": "4rPjs1k8cBVnT9xQ...c912",
    "verified": true,
    "timestamp": "2026-04-16T11:59:30Z"
  }
}
GET /agents/:id/proofs
List all verification proofs generated by an agent. Supports pagination via cursor and limit query parameters.
{
  "data": [
    {
      "proof_id": "prf_a1b2c3",
      "slot": 268294017,
      "signature": "4rPjs1k8cBVnT9xQ...c912",
      "system": "sp1",
      "recursion": "groth16-bn254",
      "verified": true,
      "compute_units_used": 284000,
      "timestamp": "2026-04-16T11:59:30Z"
    }
  ],
  "pagination": {
    "cursor": "eyJpZCI6MTAwfQ==",
    "has_more": true
  }
}
DELETE /agents/:id
Stop a running agent and deallocate its compute resources. This action is irreversible. A final proof is generated before shutdown.
{
  "id": "agt_7x9k2m4n",
  "status": "stopped",
  "final_proof": "prf_z9y8x7",
  "stopped_at": "2026-04-16T12:05:00Z"
}
Installation
This guide walks you through installing the NexusForge CLI and Solana toolchain, configuring your environment, and verifying that everything is working correctly.
System Requirements
| Requirement | Minimum | Recommended |
|---|---|---|
| Node.js | 18.0.0 | 20.x LTS |
| npm | 9.0.0 | 10.x |
| Operating System | macOS 12+, Ubuntu 20.04+, Debian 11+, Windows WSL2 | — |
| Disk Space | 200 MB | 500 MB |
| Memory | 512 MB | 2 GB |
Note: Native Windows (non-WSL) is not currently supported. Windows users should install WSL2 with an Ubuntu distribution before proceeding.
Install via npm
The recommended way to install NexusForge is through the npm registry. This installs the CLI globally on your system. You will also need the Solana CLI and Anchor framework for building agent programs.
# Install the forge CLI
npm install -g @nexusforge/forge-cli
# Install Solana CLI 1.18+
sh -c "$(curl -sSfL https://release.solana.com/v1.18.22/install)"
# Install Anchor 0.30+
cargo install --git https://github.com/coral-xyz/anchor avm --locked
avm install 0.30.1 && avm use 0.30.1
Verify Installation
After installation completes, verify that the CLI is accessible and check the installed version:
forge-cli --version
solana --version
anchor --version
You should see output similar to:
forge-cli/2.4.1 linux-x64 node-v20.11.0
solana-cli 1.18.22
anchor-cli 0.30.1
Alternative: Docker Installation
If you prefer a containerized environment or need a reproducible CI/CD setup, you can use the official Docker image:
# Pull the latest stable image
docker pull nexusforge/cli:latest
# Run the CLI inside the container
docker run --rm -it \
-v $(pwd):/workspace \
-e FORGE_KEYPAIR=/workspace/id.json \
nexusforge/cli:latest forge-cli --version
The Docker image bundles Node.js 20, the Solana CLI, Anchor, and forge-cli, with all required dependencies. It is based on node:20-slim and weighs approximately 380 MB.
Configure the CLI
Before you can deploy agents or interact with the NexusForge protocol, configure the Solana cluster and point forge-cli at a signing keypair. Any Solana keypair works — file-based, ledger, or Phantom / Solflare / Backpack via Wallet Adapter.
# Point Solana at mainnet-beta (or use a Helius RPC for higher limits)
solana config set --url https://mainnet.helius-rpc.com/?api-key=YOUR_KEY
# Tell forge-cli which keypair to sign with
forge-cli config set keypair ~/.config/solana/id.json
# Default cluster for all subsequent commands
forge-cli config set cluster mainnet-beta
# Verify the configuration
forge-cli config list
Expected output:
keypair: ~/.config/solana/id.json
pubkey: FoRGeAgnt1111111111111111111111111111111111
cluster: mainnet-beta
rpc: https://mainnet.helius-rpc.com/?api-key=****
log-level: info
telemetry: enabled
enclave: sev-snp (default)
Environment Variables
As an alternative to forge-cli config set, you can use environment variables. Environment variables take precedence over config file values.
| Variable | Description | Default |
|---|---|---|
| FORGE_KEYPAIR | Path to the Ed25519 keypair JSON used to sign CLI and SDK requests. Required for all on-chain operations. | ~/.config/solana/id.json |
| FORGE_CLUSTER | Target Solana cluster. Accepts mainnet-beta, devnet, or localnet (solana-test-validator). | mainnet-beta |
| FORGE_RPC_URL | Override the RPC endpoint. Helius and Triton are recommended for production workloads. | https://api.mainnet-beta.solana.com |
| FORGE_LOG_LEVEL | Controls CLI output verbosity. Accepts silent, error, warn, info, debug, or trace. | info |
# Export variables in your shell profile (.bashrc, .zshrc)
export FORGE_KEYPAIR="$HOME/.config/solana/id.json"
export FORGE_CLUSTER="mainnet-beta"
export FORGE_RPC_URL="https://mainnet.helius-rpc.com/?api-key=YOUR_KEY"
export FORGE_LOG_LEVEL="debug"
First Agent
In this tutorial, you will build a simple agent that reads the current SOL/USD price from a Pyth feed on Solana and logs it. By the end, your agent will be running on mainnet-beta with an SP1 proof of every execution verified on-chain.
1. Create a Project Directory
Start by scaffolding a new agent project using the CLI. This creates the required directory structure, an Anchor program, and a TypeScript handler.
forge-cli init sol-price-agent
cd sol-price-agent
The generated project structure looks like this:
sol-price-agent/
├── agent.nexus.yaml
├── programs/
│ └── sol-price-agent/ # Anchor program (Rust)
│ ├── Cargo.toml
│ └── src/lib.rs
├── src/
│ ├── handler.ts
│ └── utils.ts
├── tests/
│ └── handler.test.ts
├── Anchor.toml
├── package.json
└── tsconfig.json
2. Configure the Agent Manifest
Open agent.nexus.yaml and customize it for your use case. Below is a detailed breakdown of every field:
name: sol-price-agent
version: 1.0.0
description: "Reads the latest SOL/USD price from a Pyth feed"

# Runtime environment for deterministic execution
runtime: deterministic-v2

# Solana cluster this agent targets
cluster: mainnet-beta

# Trigger configuration — when the agent executes
trigger:
  type: slot   # Options: slot, cron, event, webhook
  every: 150   # ~60s at 400ms slots

# Execution settings
execution:
  enclave: sev-snp   # AMD SEV-SNP enclave (required on mainnet)
  compute_units: 200000
  priority_fee_lamports: 10000
  timeout_ms: 30000
  retries: 3

# Proof generation and posting
verification:
  proof_system: sp1
  recursion: groth16-bn254   # Compressed for alt_bn128 verification
  post_to: solana
  batch_size: 10   # Aggregate 10 executions per proof
runtime — Specifies the deterministic execution environment version. deterministic-v2 is the current stable runtime with SBF support and a 64 MB heap.
trigger — Defines when the agent executes. slot runs every N slots (400 ms each). cron accepts standard cron expressions. event subscribes to an on-chain program log filter. webhook exposes an HTTPS endpoint.
enclave — The TEE type used for secure execution. sev-snp (AMD SEV-SNP with VCEK attestation) is required on mainnet-beta. sim skips enclave attestation and is intended for solana-test-validator / litesvm local development only.
batch_size — The number of execution cycles to aggregate into a single recursive proof before posting on-chain. Higher values reduce compute-unit and priority-fee costs but increase proof latency.
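To make the batch_size trade-off concrete, here is a back-of-the-envelope sketch assuming on-chain verification costs on the order of 180,000 CU per posted proof (the figure used elsewhere in these docs); the numbers are illustrative:

```typescript
// On-chain verification cost is roughly flat per posted proof (~180k CU assumed),
// so the amortized cost per execution cycle falls linearly with batch_size.
const VERIFY_CU = 180_000; // approximate CU per on-chain Groth16 verification

function cuPerExecution(batchSize: number): number {
  return VERIFY_CU / batchSize;
}

console.log(cuPerExecution(1));  // 180000: every cycle posts its own proof
console.log(cuPerExecution(10)); // 18000:  the manifest above
console.log(cuPerExecution(50)); // 3600:   higher batch, higher proof latency
```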
3. Write the Agent Logic
Open src/handler.ts and replace the boilerplate with the following code. The handler is invoked on each execution cycle and receives a context object with a @solana/web3.js connection, the agent's Anchor program, and logging utilities.
import { AgentContext, ExecutionResult } from "@nexusforge/sdk";
import { PublicKey } from "@solana/web3.js";
import { PythHttpClient, getPythProgramKeyForCluster } from "@pythnetwork/client";

// Pyth SOL/USD price feed on Solana mainnet-beta
const SOL_USD_FEED = new PublicKey("H6ARHf6YXhGYeQfUzQNGk6rDNnLBQKrenN712K4AQJEG");

export default async function handler(ctx: AgentContext): Promise<ExecutionResult> {
  // Read the latest price from Pyth
  const pyth = new PythHttpClient(ctx.connection, getPythProgramKeyForCluster("mainnet-beta"));
  const feed = await pyth.getAssetPricesFromAccounts([SOL_USD_FEED]);
  const { price, publishTime } = feed[0];

  const solPrice = Number(price);
  const updatedDate = new Date(publishTime * 1000).toISOString();
  ctx.log.info(`SOL/USD: $${solPrice.toFixed(2)} (updated ${updatedDate})`);

  // Store result in the agent's PDA state (Light Protocol ZK Compression)
  await ctx.state.set("sol_price", solPrice);
  await ctx.state.set("last_updated", updatedDate);

  return {
    success: true,
    data: { solPrice, updatedAt: updatedDate },
  };
}
4. Test Locally
The forge-cli dev command starts a local development server that simulates the enclave environment on top of solana-test-validator (or litesvm for faster iteration). It uses the sim enclave mode and clones Pyth accounts from mainnet-beta.
forge-cli dev --clone H6ARHf6YXhGYeQfUzQNGk6rDNnLBQKrenN712K4AQJEG
You should see output similar to:
▶ Starting solana-test-validator...
▶ Enclave mode: sim (simulation)
▶ Cloning Pyth SOL/USD account from mainnet-beta...
✔ Agent loaded: sol-price-agent v1.0.0
✔ Trigger: slot (every 150 slots ≈ 60s)
✔ Dev server running at http://localhost:4830
[12:00:01] SOL/USD: $187.42 (updated 2026-04-16T11:59:48Z)
[12:01:01] SOL/USD: $187.61 (updated 2026-04-16T12:00:50Z)
5. Deploy to the Network
When you are satisfied with local testing, deploy the agent to mainnet-beta. The CLI compiles the Anchor program to SBF, uploads the TypeScript handler to operator nodes, and initializes the agent PDA.
anchor build
forge-cli agent deploy ./agent.nexus.yaml --cluster mainnet-beta
Deployment output:
▶ Building Anchor program...
▶ Program size: 142 KB
▶ Deploying to mainnet-beta via Jito bundle...
▶ Assigned operator: node_3f8a (us-east-1, sev-snp)
✔ Agent deployed successfully
ID: agt_r4t5u6v7
PDA: FoRGeAgnt1111111111111111111111111111111111
Status: running
Cluster: mainnet-beta
Trigger: slot (every 150 slots)
Proofs: SP1 → Groth16, batched (10 per proof)
Jito tip: 10,000 lamports
View in dashboard: https://app.nexusforge.io/agents/agt_r4t5u6v7
6. Monitor Your Agent
Use the forge-cli logs command to stream real-time logs from your running agent. The --follow flag keeps the stream open.
forge-cli logs agt_r4t5u6v7 --follow
You can also inspect specific execution cycles and their associated proofs:
# View the last 5 executions
forge-cli executions agt_r4t5u6v7 --limit 5
# Verify the latest proof on-chain
forge-cli verify agt_r4t5u6v7
Protocol Architecture
NexusForge is built on a three-layer architecture designed to separate concerns between execution, verification, and settlement. Each layer operates independently and communicates through well-defined interfaces, with all settlement happening natively on Solana mainnet-beta.
Architecture Overview
The protocol stack consists of three distinct layers, each with its own trust model and performance characteristics:
| Layer | Role | Trust Model | Components |
|---|---|---|---|
| Execution | Run agent code in isolated, attested environments | AMD SEV-SNP attestation + deterministic replay | SEV-SNP enclaves, SBF/WASM runtime, scheduler co-located with Solana leaders |
| Verification | Generate cryptographic proofs of correct execution | SP1 zkVM → Groth16 over BN254 | SP1 prover nodes, STARK-to-SNARK compressor, recursive aggregator |
| Settlement | Anchor proofs on-chain for public verifiability | Solana mainnet-beta single-slot finality | Verifier program, Jito bundle executor, Light Protocol ZK Compression state |
Execution Layer
The Execution Layer runs agent code securely and deterministically. When an agent is deployed, its compiled handler is loaded into an AMD SEV-SNP enclave on an operator node co-located with a Solana leader.
Agent code runs inside AMD SEV-SNP confidential VMs. The hardware encrypts VM memory and produces a VCEK-signed attestation report that is verified by the protocol before any output is accepted. The attestation report is posted on-chain alongside each proof batch.
NexusForge uses a custom runtime (deterministic-v2) that eliminates sources of non-determinism such as floating-point rounding, random number generation, and system clock access. All I/O operations go through a controlled host interface, and every RPC read is recorded in the execution trace for later replay.
The scheduler dispatches execution cycles based on trigger configuration (slot, cron, event, or webhook). Operators co-located with Solana leaders claim work via a signed slot-lease; if an operator drops a cycle, the scheduler transparently reassigns it to a healthy node with a fresh enclave. Urgent cycles land through Jito bundles with tipped priority fees.
Verification Layer
The Verification Layer transforms execution traces into compact zero-knowledge proofs that anyone can verify without re-executing the agent code.
After each execution cycle, the execution trace is fed into an SP1 zkVM prover. SP1 produces a STARK proof of correct execution in 2–8 seconds on a 32-core prover. The STARK is then compressed into a Groth16 proof over the BN254 curve so that it fits inside Solana's compute-unit limit.
Individual execution proofs are composed using SP1's recursion circuit. This lets the protocol batch hundreds of execution proofs into a single Groth16 proof, dramatically reducing on-chain verification cost. The recursion depth is configurable via the verification.batch_size parameter in the agent manifest.
The proof aggregator collects recursive proofs from multiple agents and merges them into a single aggregate Groth16 proof. Aggregate proofs are posted to the Solana verifier program on a fixed cadence (every slot for high-priority agents, up to 60s for standard). This amortizes compute-unit and priority-fee costs across all active agents.
Settlement Layer
The Settlement Layer anchors verification proofs on Solana, providing a public, tamper-proof record of agent execution integrity with single-slot finality.
Aggregate Groth16 proofs are submitted to the NexusForge Verifier program (vrFyNxsFrg3E111111111111111111111111111111). The program verifies the proof using Solana's alt_bn128 syscalls (pairing check over BN254) and emits a ProofVerified CPI event. Verification consumes ~180,000 compute units per proof (roughly 0.000012 SOL at a 5,000 micro-lamports/CU priority fee).
Each proof includes a Light Protocol ZK-compressed state root for the agent's PDA. Compressed state lets an agent store millions of per-user entries without paying per-account rent. State roots are indexed by the protocol's indexer for efficient historical queries.
For agents that need data from other chains, NexusForge consumes Wormhole messages and Pyth cross-chain price updates, but all settlement is on Solana. Outbound cross-chain writes are out of scope — the protocol is Solana-native.
Data Flow
The following table illustrates the end-to-end lifecycle of a single agent execution as it flows through all three layers:
| Step | Layer | Action | Output |
|---|---|---|---|
| 1 | Execution | Scheduler triggers agent cycle | Execution dispatched to enclave |
| 2 | Execution | Agent code runs in TEE, reads/writes chain state | Execution trace + result |
| 3 | Verification | Execution trace is proved via SP1 zkVM | Individual STARK proof |
| 4 | Verification | STARK proofs recursively composed, then compressed to Groth16 over BN254 | Aggregate Groth16 proof |
| 5 | Settlement | Aggregate proof posted via Jito bundle to verifier program | alt_bn128 pairing check + ProofVerified event |
| 6 | Settlement | Light Protocol ZK-compressed state root committed to agent PDA | Updated compressed state root |
Execution Model
NexusForge agents execute inside hardware-isolated enclaves with deterministic guarantees. This section explains how code runs, how resources are metered, and what happens when things go wrong.
TEE Enclave Execution
When an agent is assigned to an operator, its handler bundle is loaded into an AMD SEV-SNP confidential VM. Mainnet-beta requires SEV-SNP; other TEEs are not eligible.
| Enclave Type | Hardware | Isolation Model | Max Memory | Attestation |
|---|---|---|---|---|
| AMD SEV-SNP | AMD EPYC (Milan / Genoa) | VM-level (confidential VMs) | 64 GB encrypted per VM | VCEK-signed SEV-SNP attestation report |
Before any execution output is accepted by the protocol, the enclave must produce a VCEK-signed SEV-SNP attestation report. The report is verified by an on-chain attestation service that maintains an allowlist of known-good launch measurements and the current AMD TCB version. Reports are posted alongside each proof batch and checked on-chain via the verifier program.
Deterministic Execution Guarantees
The deterministic-v2 runtime enforces strict determinism to ensure that any party can replay an execution and arrive at the same result. The following sources of non-determinism are eliminated:
- Floating-point arithmetic — replaced with fixed-point math using 256-bit integers. IEEE 754 operations are trapped and rejected at compile time.
- System clock — replaced with a logical clock derived from the Solana slot and trigger. Date.now() always returns the cycle's canonical slot timestamp.
- Random number generation — Math.random() is seeded with a deterministic value derived from the agent PDA, cycle number, and recent blockhash.
- External I/O — all RPC reads (getAccountInfo, getMultipleAccounts, Pyth/Switchboard pulls) are intercepted by the runtime host and their responses are recorded in the execution trace. During replay, responses are served from the trace instead of making live requests.
- Memory allocation — the runtime's linear memory allocator uses a fixed-order buddy system that produces identical allocation layouts across runs.
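As an illustration of the seeding idea only (not the runtime's actual derivation), a deterministic seed can be hashed from the agent PDA, cycle number, and recent blockhash:

```typescript
import { createHash } from "node:crypto";

// Illustrative only: derive a 32-bit PRNG seed from the same inputs the
// runtime uses. The real derivation inside deterministic-v2 is not shown here.
function deterministicSeed(agentPda: string, cycle: number, blockhash: string): number {
  const digest = createHash("sha256").update(`${agentPda}:${cycle}:${blockhash}`).digest();
  return digest.readUInt32BE(0); // first 4 bytes of the hash as the seed
}

// The same (pda, cycle, blockhash) triple always yields the same seed,
// so a replay produces identical "random" values.
const a = deterministicSeed("FoRGeAgnt1111111111111111111111111111111111", 42, "9xQeWv...");
const b = deterministicSeed("FoRGeAgnt1111111111111111111111111111111111", 42, "9xQeWv...");
console.log(a === b); // true
```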
Compute Units and Resource Limits
Each agent execution cycle is metered against Solana's native compute-unit budget as well as NexusForge's internal trace-step counter (used when sizing SP1 proofs). The metering system tracks four resources:
| Resource | Unit | Default Limit | Max Allowed |
|---|---|---|---|
| On-chain compute | Solana compute units | 400,000 | 1,400,000 (per tx) |
| Off-chain trace | SP1 zkVM cycles | 2,000,000 | 20,000,000 |
| Memory | Pages (64 KB each) | 1,024 (64 MB) | 4,096 (256 MB) |
| RPC reads | Account fetches per cycle | 50 | 500 |
If an agent exceeds any limit, the execution is halted with an OUT_OF_COMPUTE error. The partial execution trace is still recorded and proved, but the agent's state is not updated. You can configure limits in agent.nexus.yaml:
execution:
  enclave: sev-snp
  compute_units: 600000   # Solana compute-unit cap per tx
  priority_fee_lamports: 20000
  max_trace_cycles: 5000000   # SP1 zkVM cycle budget
  max_memory: 2048   # Memory pages (128 MB)
  max_rpc_reads: 100
  timeout_ms: 60000
Execution Lifecycle
Every agent execution cycle follows a four-phase lifecycle:
1. Initialize — The runtime loads the agent's handler bundle into the SEV-SNP VM, restores the compressed state from the agent PDA via Light Protocol, and initializes a @solana/web3.js connection pinned to a Helius / Triton endpoint. The enclave produces a fresh VCEK attestation report.
2. Execute — The agent's handler function is invoked with a context object. The agent reads Solana accounts, optionally CPIs into Jupiter / Pyth / Kamino / Drift, updates its local state, and returns an ExecutionResult. All reads and CPIs are recorded in the execution trace.
3. Prove — The execution trace is fed into the SP1 zkVM. The prover generates a STARK proof of correct execution, then compresses it into a Groth16 proof over BN254 so it fits inside Solana's compute-unit budget.
4. Settle — The proof (or a batch of aggregated proofs) is bundled via Jito with a tipped priority fee and sent to the verifier program. The program runs an alt_bn128 pairing check, updates the agent's compressed state root, and emits a ProofVerified event.
Error Handling and Retry Policy
NexusForge implements an automatic retry policy for transient execution failures. The retry behavior is configurable per agent:
| Error Type | Retryable | Default Retries | Backoff |
|---|---|---|---|
| RPC timeout | Yes | 3 | Exponential (1s, 2s, 4s) |
| Bundle dropped (Jito) | Yes | 2 | Fixed (next slot + higher tip) |
| Out of compute units | No | — | — |
| Handler exception | No | — | — |
| Attestation failure | Yes (node reassign) | 1 | Immediate |
| State store unavailable | Yes | 5 | Exponential (500ms base) |
When all retries are exhausted, the execution cycle is marked as failed and the agent's status transitions to degraded. A webhook notification is sent if configured. The agent continues to execute on the next trigger cycle.
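The exponential schedules in the table can be sketched as follows; retryDelaysMs is an illustrative helper, not a CLI or SDK function:

```typescript
// Exponential backoff: baseMs, 2*baseMs, 4*baseMs, ... one delay per retry.
function retryDelaysMs(retries: number, baseMs = 1000): number[] {
  return Array.from({ length: retries }, (_, attempt) => baseMs * 2 ** attempt);
}

console.log(retryDelaysMs(3));      // [1000, 2000, 4000] (RPC timeout policy)
console.log(retryDelaysMs(5, 500)); // [500, 1000, 2000, 4000, 8000] (state store policy)
```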
Execution Config Example
The following example shows a complete execution configuration for a high-throughput Jupiter arbitrage agent scanning every slot:
name: jupiter-arb-scanner
version: 2.1.0
runtime: deterministic-v2
cluster: mainnet-beta
trigger:
  type: slot
  every: 1   # Every slot (~400ms)
execution:
  enclave: sev-snp
  compute_units: 1200000
  priority_fee_lamports: 50000
  max_memory: 4096   # 256 MB
  max_rpc_reads: 200
  timeout_ms: 15000
  retries: 2
  retry_backoff: exponential
  retry_base_delay_ms: 500
  priority: high   # Uses Jito bundle fast lane
verification:
  proof_system: sp1
  recursion: groth16-bn254
  post_to: solana
  batch_size: 50   # Aggregate 50 cycles per proof
  recursive: true
Verification
Verification is the core trust mechanism of NexusForge. Every agent execution produces a zero-knowledge proof that cryptographically guarantees the computation was performed correctly, without revealing the agent's internal state or logic.
How SP1 Proofs Work
NexusForge uses SP1, a RISC-V zkVM with FRI-based STARK proofs, then compresses the final proof into a Groth16 SNARK over the BN254 curve. Solana's alt_bn128 syscalls (pairing and EC ops over BN254) let the verifier program check the proof for well under a single transaction's compute budget. The proving pipeline works as follows:
- The agent's handler is compiled for RISC-V and executed inside the SP1 zkVM, producing an execution trace of all intermediate steps.
- Constraint polynomials enforce that each RISC-V transition is valid and that the trace matches the committed program binary.
- SP1's recursion circuit composes many per-cycle STARKs into a single compressed STARK proof.
- The compressed STARK is then wrapped in a Groth16 proof over BN254 using a SNARK-over-STARK circuit, shrinking the on-chain payload to <260 bytes.
- The Solana verifier program performs a pairing check via alt_bn128 syscalls in <200k compute units regardless of the original trace length.
Proof Structure
Each NexusForge proof contains three components:
Execution trace — A columnar representation of every computational step performed by the agent. The trace is not included in the proof itself (it would be too large) but is committed to via polynomial commitments. The prover must possess the full trace to generate the proof, but the verifier only needs the commitment.
Witness — Private inputs to the computation, including the agent's internal state, secret keys used for signing, and any confidential data accessed during execution. The witness is never revealed — the ZK property ensures that the proof can be verified without knowledge of the witness.
Public inputs — The publicly visible data that the proof is bound to. This includes the agent ID, the execution cycle number, the input state root, the output state root, and the hash of all external I/O operations. Anyone can read the public inputs and verify that the proof corresponds to a specific execution.
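The public inputs listed above could be modeled as a simple record. This is an illustrative shape only, not the protocol's wire format:

```typescript
// Illustrative public-input record; the actual on-chain encoding is not specified here.
interface PublicInputs {
  agentId: string;         // agent PDA
  cycle: number;           // execution cycle number
  inputStateRoot: string;  // state root before execution
  outputStateRoot: string; // state root after execution
  ioHash: string;          // hash of all external I/O in the trace
}

const inputs: PublicInputs = {
  agentId: "FoRGeAgnt1111111111111111111111111111111111",
  cycle: 11842,
  inputStateRoot: "0xabc...",
  outputStateRoot: "0xdef...",
  ioHash: "0x123...",
};
console.log(Object.keys(inputs).length); // 5
```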
On-Chain Verification
Proofs are verified on-chain by the NexusForge Verifier program. Verification is non-interactive and requires only the proof bytes and the public inputs.
| Component | Program / Address | Compute Units | Estimated Fee |
|---|---|---|---|
| Verifier program | vrFyNxsFrg3E111111111111111111111111111111 | ~180,000 CU | ~$0.00002 base + priority tip |
| Agent registry | FoRGeAgnt1111111111111111111111111111111111 | ~40,000 CU (state update) | — |
| Jito bundle tip | Jito Block Engine | — | 10,000–50,000 lamports per bundle |
The pairing check runs through Solana's alt_bn128 syscalls (sol_alt_bn128_addition, sol_alt_bn128_multiplication, sol_alt_bn128_pairing). A full verification — including reading the public inputs from the agent PDA and updating the compressed state root — stays under 250,000 compute units, leaving headroom for CPIs into Jupiter, Pyth, or other programs in the same transaction.
Recursive Proof Composition
One of SP1's key strengths is efficient recursive composition. A recursive proof is a proof that verifies other proofs. NexusForge uses recursion in two stages:
- Agent-level recursion — Multiple execution cycles from the same agent are composed into a single STARK. For example, with batch_size: 10, ten execution proofs are recursively merged into one proof that attests to the correctness of all ten cycles.
- Protocol-level aggregation — Recursive STARKs from multiple agents are wrapped in a single Groth16 proof. This is posted on-chain once per aggregation window (every slot for high-priority agents; up to 60s for standard).
Recursive composition reduces on-chain cost by a factor proportional to the batch size. With 100 agents each batching 10 cycles, a single on-chain verification (~180k CU) covers 1,000 individual execution proofs.
Verifying Proofs Programmatically
You can verify NexusForge proofs in your own applications using the TypeScript SDK, or by CPIing into the verifier program directly from your own Anchor program.
import { NexusForge } from "@nexusforge/sdk";
import { Connection } from "@solana/web3.js";

const connection = new Connection("https://mainnet.helius-rpc.com/?api-key=YOUR_KEY", "confirmed");
const forge = new NexusForge({ connection });

async function verifyLatestProof(agentPda: string) {
  // Fetch the latest proof for the agent
  const proof = await forge.proofs.getLatest(agentPda);
  console.log("Proof ID: ", proof.id);
  console.log("Slot: ", proof.slot);
  console.log("System: ", proof.system); // "sp1"
  console.log("Public Inputs: ", proof.publicInputs);

  // Verify the proof locally (off-chain alt_bn128 simulator)
  const result = await forge.proofs.verify(proof);
  if (result.valid) {
    console.log("✔ Proof is valid");
    console.log("  State root:", result.stateRoot);
    console.log("  Verified in:", result.duration, "ms");
  } else {
    console.error("✘ Proof verification failed:", result.error);
  }
}

verifyLatestProof("FoRGeAgnt1111111111111111111111111111111111");
To verify a proof inside your own Anchor program, CPI into the verifier:
use anchor_lang::prelude::*;
use nexusforge_verifier::cpi::accounts::Verify;
use nexusforge_verifier::cpi::verify;

pub fn consume_forge_proof(ctx: Context<ConsumeProof>, proof: Vec<u8>) -> Result<()> {
    let cpi_program = ctx.accounts.verifier_program.to_account_info();
    let cpi_accounts = Verify {
        agent: ctx.accounts.agent.to_account_info(),
        state_root: ctx.accounts.state_root.to_account_info(),
        instructions: ctx.accounts.instructions.to_account_info(),
    };
    let cpi_ctx = CpiContext::new(cpi_program, cpi_accounts);

    // Runs alt_bn128 pairing check inside the verifier program
    verify(cpi_ctx, proof, ctx.accounts.agent.key().to_bytes())?;
    Ok(())
}
Proof Explorer
NexusForge provides a public proof explorer where you can browse, search, and verify any proof generated on the network. The explorer displays proof metadata, public inputs, verification status, and links to the on-chain transaction.
Visit the Proof Explorer at https://explorer.nexusforge.io/proofs.
Cross-Chain Messaging
NexusForge agents can read and write state across multiple blockchains through a unified cross-chain messaging layer. Messages are routed by a decentralized relay network and backed by the same ZK proof system used for agent execution.
Supported Chains
The following table lists all chains currently supported by the NexusForge cross-chain messaging layer:
| Chain | Chain ID | Status | Avg. Finality | Messaging Contract |
|---|---|---|---|---|
| Ethereum | 1 | Mainnet | ~13 min | 0xa1b2...c3d4 |
| Solana | — | Mainnet | ~400 ms | NxFMsg...7kPQ |
| Base | 8453 | Mainnet | ~2 s | 0xd4e5...f6a7 |
| Arbitrum | 42161 | Mainnet | ~250 ms | 0xb8c9...d0e1 |
| Optimism | 10 | Mainnet | ~2 s | 0xf2a3...b4c5 |
| Polygon | 137 | Mainnet | ~2 s | 0x6d7e...8f9a |
| Avalanche | 43114 | Mainnet | ~1 s | 0x0b1c...2d3e |
| BSC | 56 | Mainnet | ~3 s | 0x4f5a...6b7c |
| Sei | 1329 | Testnet | ~400 ms | 0x8d9e...0f1a |
| Monad | — | Testnet | ~500 ms | 0x2b3c...4d5e |
| Berachain | 80094 | Testnet | ~1 s | 0x6f7a...8b9c |
How Cross-Chain Messages Work
When an agent sends a cross-chain message, the following process occurs:
1. **Emission:** The agent calls `ctx.send()` with the target chain, destination address, and payload. The message is recorded in the execution trace and included in the ZK proof for the current cycle.
2. **Attestation:** Once the execution proof is verified on the source chain, the message is picked up by the relay network. Relayers are decentralized node operators who stake NXF tokens and earn fees for delivering messages.
3. **Relay:** The relayer submits the message and its proof to the NexusForge Messaging contract on the destination chain. The contract verifies the proof before accepting the message.
4. **Delivery:** The destination contract emits a `MessageDelivered` event and optionally calls a callback function on the target address. The message is now available for the recipient to process.
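The four stages above can be modeled as a simple linear state machine. The sketch below is illustrative only: the stage names mirror this page, but the transition logic is not the relayer's actual implementation.

```typescript
// Illustrative model of the cross-chain message lifecycle described above.
// Stage names mirror the documentation; the transition rules are assumptions.
type MessageStage = "emitted" | "attested" | "relayed" | "delivered";

const NEXT_STAGE: Record<MessageStage, MessageStage | null> = {
  emitted: "attested",   // source-chain proof verified
  attested: "relayed",   // relayer submits message + proof to destination
  relayed: "delivered",  // destination contract emits MessageDelivered
  delivered: null,       // terminal
};

// Advance a message one stage; throws on an out-of-order transition.
function advance(current: MessageStage): MessageStage {
  const next = NEXT_STAGE[current];
  if (next === null) throw new Error(`message already ${current}`);
  return next;
}
```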
Message Format
Cross-chain messages follow a standardized envelope format:
```jsonc
{
  "id": "msg_x7y8z9a0",
  "source": {
    "chain": "ethereum",
    "agent": "agt_r4t5u6v7",
    "cycle": 1842
  },
  "destination": {
    "chain": "base",
    "address": "0x742d35Cc6634C0532925a3b844Bc9e7595f2bD18"
  },
  "payload": "0x...",        // ABI-encoded calldata
  "nonce": 47,               // Per-agent sequential nonce
  "proof_ref": "prf_a1b2c3", // Reference to the execution proof
  "max_gas": 200000,         // Gas limit for destination execution
  "timestamp": 1713264000    // Source chain timestamp
}
```
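As a sketch, the envelope above can be checked client-side before processing. The field names come from the documented format; the nonce rule (strictly sequential per agent) comes from the field comments, and treating the first expected nonce as 0 is an assumption.

```typescript
// Sketch of a client-side sequencing check for the envelope format above.
// Field names are from the documented envelope; starting nonces at 0 is
// an assumption for illustration.
interface Envelope {
  id: string;
  source: { chain: string; agent: string; cycle: number };
  destination: { chain: string; address: string };
  payload: string;
  nonce: number;
  proof_ref: string;
  max_gas: number;
  timestamp: number;
}

// Returns true when `msg` carries the next expected nonce for its agent,
// updating the tracker; false on a gap or replay.
function isNextInSequence(msg: Envelope, lastNonce: Map<string, number>): boolean {
  const prev = lastNonce.get(msg.source.agent) ?? -1;
  if (msg.nonce !== prev + 1) return false;
  lastNonce.set(msg.source.agent, msg.nonce);
  return true;
}
```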
Routing and Latency
Message routing is automatic — the protocol selects the fastest available relay path based on current network conditions. Expected end-to-end latency (from `ctx.send()` to `MessageDelivered`) depends on the source and destination chains:
| Source | Destination | Expected Latency |
|---|---|---|
| Ethereum | Base / Arbitrum / Optimism | ~15 minutes (waits for L1 finality) |
| Base / Arbitrum / Optimism | Ethereum | ~20 minutes (includes L2 confirmation + relay) |
| L2 ↔ L2 | Base, Arbitrum, Optimism | ~3 minutes (routed via shared sequencer) |
| Any EVM | Solana | ~2 minutes |
| Solana | Any EVM | ~3 minutes |
| L2 ↔ L2 | Same ecosystem (e.g., OP Stack) | ~30 seconds (fast path) |
For time-critical applications, you can opt into the fast path by setting `priority: "fast"` in the send options. Fast-path messages rely on optimistic relaying with a short challenge window (30 seconds) instead of waiting for full proof verification. Fast-path messages incur a higher relay fee.
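One way to pick between the two priorities is to compare the standard-path latency from the table above against your delivery deadline. The helper below is hypothetical (not part of the SDK), and the route keys are made up for illustration:

```typescript
// Hypothetical helper: choose a relay priority from a delivery deadline.
// Latency figures are from the routing table above; route keys are invented
// for this sketch. The fast path trades a 30-second optimistic challenge
// window (and a higher fee) for speed.
const STANDARD_LATENCY_SEC: Record<string, number> = {
  "evm->solana": 120,       // ~2 minutes
  "solana->evm": 180,       // ~3 minutes
  "l2->l2": 180,            // ~3 minutes via shared sequencer routing
  "op-stack->op-stack": 30, // ~30 seconds within one ecosystem
};

function choosePriority(route: string, deadlineSec: number): "standard" | "fast" {
  const expected = STANDARD_LATENCY_SEC[route];
  if (expected === undefined) throw new Error(`unknown route: ${route}`);
  // Fall back to the fast path (higher fee) only when standard delivery
  // would miss the deadline.
  return expected <= deadlineSec ? "standard" : "fast";
}
```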
Sending a Cross-Chain Message
The following example shows how to send a cross-chain message from an agent handler. The agent reads a price on Ethereum and sends it to a contract on Base.
```typescript
import { AgentContext, ExecutionResult } from "@nexusforge/sdk";

const PRICE_FEED = "0x5f4eC3Df9cbd43714FE2740f5E3616155c5b8419";
const FEED_ABI = [
  "function latestRoundData() view returns (uint80, int256, uint256, uint256, uint80)"
];

// Receiver contract on Base
const BASE_RECEIVER = "0x742d35Cc6634C0532925a3b844Bc9e7595f2bD18";
const RECEIVER_ABI = [
  "function updatePrice(uint256 price, uint256 timestamp)"
];

export default async function handler(ctx: AgentContext): Promise<ExecutionResult> {
  // Read ETH price from Chainlink on Ethereum
  const feed = ctx.chain("ethereum").contract(PRICE_FEED, FEED_ABI);
  const [, answer, , updatedAt] = await feed.read("latestRoundData");
  const ethPrice = BigInt(answer.toString());
  const timestamp = BigInt(updatedAt.toString());
  ctx.log.info(`ETH/USD: ${Number(ethPrice) / 1e8} — sending to Base`);

  // Send cross-chain message to Base
  const receipt = await ctx.send({
    chain: "base",
    to: BASE_RECEIVER,
    abi: RECEIVER_ABI,
    method: "updatePrice",
    args: [ethPrice, timestamp],
    gasLimit: 150000,
    priority: "standard", // or "fast" for optimistic relay
  });

  ctx.log.info(`Message sent: ${receipt.messageId}`);
  ctx.log.info(`Expected delivery: ~${receipt.estimatedLatency}s`);

  return {
    success: true,
    data: {
      ethPrice: Number(ethPrice) / 1e8,
      messageId: receipt.messageId,
      destinationChain: "base",
    },
  };
}
```
The `ctx.send()` call is recorded in the execution trace and included in the ZK proof. The message is only relayed after the proof is verified on the source chain, ensuring that only valid agent outputs can trigger cross-chain actions.
You can track message delivery status using the CLI or the API:
```bash
# Check message status
nexusforge message status msg_x7y8z9a0

# List all messages sent by an agent
nexusforge messages agt_r4t5u6v7 --limit 10
```
Agent Manifests
Every NexusForge agent is defined by a declarative manifest file — `agent.config.yaml`. The manifest specifies what the agent does, which chains it operates on, how it is triggered, and how its execution is verified.
Manifest Schema Reference
The manifest follows a strict schema that is validated before deployment. Below is a complete reference of every available field.
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Unique identifier for the agent. Must be lowercase alphanumeric with hyphens, 3–64 characters. |
| `version` | string | Yes | Semantic version of the agent (e.g. `1.0.0`). Used for upgrade tracking and rollback. |
| `schema` | string | No | Manifest schema version. Defaults to `v2`. Accepted values: `v1`, `v2`. |
| `runtime` | string | Yes | Execution runtime. Options: `deterministic-v2`, `deterministic-v1` (deprecated), `wasm-sandbox`. |
| `description` | string | No | Human-readable description displayed in the dashboard and registry. |
| `chains` | string[] | Yes | List of chains the agent interacts with. At least one required. |
| `trigger.type` | string | Yes | How the agent is activated. Options: `interval`, `event`, `cron`, `manual`. |
| `trigger.every` | string | Conditional | Interval duration (e.g. `30s`, `5m`). Required when `trigger.type` is `interval`. |
| `trigger.event` | object | Conditional | Event filter config (contract address, event signature, chain). Required when `trigger.type` is `event`. |
| `trigger.cron` | string | Conditional | Cron expression (e.g. `*/5 * * * *`). Required when `trigger.type` is `cron`. |
| `execution.enclave` | string | Yes | TEE platform. Options: `sgx`, `sev`, `trustzone`. |
| `execution.max_gas` | integer | No | Maximum gas per execution cycle. Default: `500000`. |
| `execution.timeout` | string | No | Maximum execution duration per cycle. Default: `30s`. Max: `5m`. |
| `execution.memory_limit` | string | No | Maximum memory allocation. Default: `256MB`. Max: `2GB`. |
| `verification.proof_system` | string | Yes | Proof backend. Options: `plonky3`, `sp1`, `risc0`. |
| `verification.post_to` | string | Yes | Chain where proofs are anchored. |
| `verification.frequency` | string | No | How often proofs are posted. Options: `every` (each execution), `batch` (batched every N executions). Default: `every`. |
| `permissions` | object | No | On-chain permissions. Sub-fields: `allowlist`, `max_value_per_tx`, `daily_limit`. |
| `secrets` | string[] | No | References to encrypted secrets stored in the NexusForge vault. Injected at runtime inside the enclave. |
| `dependencies` | object[] | No | Other agents this agent depends on. Each entry specifies `agent_id` and `output_channel`. |
| `metadata` | object | No | Arbitrary key-value pairs for tagging and organization. Max 10 keys, 256 chars per value. |
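Two of the rules in the table, the `name` pattern and the conditional trigger fields, can be sketched as a standalone validator. This is an illustration of the documented constraints, not the CLI's actual validation code:

```typescript
// Sketch of two manifest checks from the schema table above: the `name`
// pattern (lowercase alphanumeric + hyphens, 3-64 chars) and the conditional
// trigger requirements. The real `nexusforge validate` does far more.
interface TriggerConfig {
  type: "interval" | "event" | "cron" | "manual";
  every?: string;
  event?: object;
  cron?: string;
}

const NAME_RE = /^[a-z0-9-]{3,64}$/;

function validateManifest(name: string, trigger: TriggerConfig): string[] {
  const errors: string[] = [];
  if (!NAME_RE.test(name)) {
    errors.push("name: must be lowercase alphanumeric with hyphens, 3-64 chars");
  }
  // Conditional requirements from the schema table.
  if (trigger.type === "interval" && !trigger.every) {
    errors.push("trigger.every: required when trigger.type is interval");
  }
  if (trigger.type === "event" && !trigger.event) {
    errors.push("trigger.event: required when trigger.type is event");
  }
  if (trigger.type === "cron" && !trigger.cron) {
    errors.push("trigger.cron: required when trigger.type is cron");
  }
  return errors;
}
```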
Complete Example: Multi-Chain Arbitrage Agent
Below is a production-ready manifest for a complex arbitrage agent that monitors price discrepancies across Ethereum, Base, and Arbitrum, executes atomic swaps, and posts batched proofs.
```yaml
schema: v2
name: multi-chain-arb-v3
version: 2.1.0
description: "Cross-chain DEX arbitrage with flash loan support"
runtime: deterministic-v2

chains:
  - ethereum
  - base
  - arbitrum

trigger:
  type: event
  event:
    chain: ethereum
    contract: "0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D"
    signature: "Swap(address,uint256,uint256,uint256,uint256,address)"
    confirmations: 1

execution:
  enclave: sgx
  max_gas: 1200000
  timeout: 15s
  memory_limit: 512MB

verification:
  proof_system: plonky3
  post_to: ethereum
  frequency: batch
  batch_size: 10

permissions:
  allowlist:
    - "0x7a250d5630B4cF539739dF2C5dAcb4c659F2488D" # Uniswap V2
    - "0x2626664c2603336E57B271c5C0b26F421741e481" # Uniswap V3 Base
    - "0x1b02dA8Cb0d097eB8D57A175b88c7D8b47997506" # SushiSwap Arb
  max_value_per_tx: "5000000000000000000" # 5 ETH
  daily_limit: "50000000000000000000"     # 50 ETH

secrets:
  - FLASH_LOAN_PROVIDER_KEY
  - PRICE_FEED_API_TOKEN

dependencies:
  - agent_id: agt_price_oracle_01
    output_channel: price_updates
  - agent_id: agt_gas_estimator
    output_channel: gas_estimates

metadata:
  team: trading-desk
  environment: production
  strategy: tri-arb-v3
```
Manifest Validation
Always validate your manifest before deploying. The CLI performs schema validation, chain compatibility checks, and permission safety analysis.
```bash
nexusforge validate agent.config.yaml
```
Example output for a valid manifest:
```
✔ Schema valid (v2)
✔ Runtime "deterministic-v2" supported
✔ All chains reachable: ethereum, base, arbitrum
✔ Event trigger contract verified on ethereum
✔ Proof system "plonky3" compatible with target chain
✔ Permission allowlist contracts verified (3/3)
✔ Secret references resolvable (2/2)
✔ Dependencies available and healthy (2/2)

Manifest is valid. Ready to deploy.
```
If validation fails, the CLI outputs actionable errors:
```
✘ trigger.event.contract: Address not verified on "ethereum"
  Hint: Run `nexusforge contract verify 0x7a25...` or check the address.
✘ permissions.max_value_per_tx: Exceeds network safety cap (10 ETH)
  Hint: Reduce to <= 10000000000000000000 or request a limit increase.
```
Schema Versioning: v1 vs v2
The manifest schema has two versions. New agents should always use v2. The v1 format is deprecated and will be removed in CLI version 3.0.
| Feature | v1 (deprecated) | v2 (current) |
|---|---|---|
| Multi-chain triggers | Not supported | Supported |
| Batch proofs | Not supported | Supported via `verification.frequency: batch` |
| Agent dependencies | Not supported | Supported via `dependencies` field |
| Permission model | Flat allowlist only | Allowlist + value caps + daily limits |
| Secrets management | Environment variables | Encrypted vault references |
| Runtime options | `deterministic-v1` only | `deterministic-v2`, `wasm-sandbox` |
To migrate a v1 manifest to v2, run:
```bash
nexusforge manifest migrate --from v1 --to v2 agent.config.yaml
```
Agent Lifecycle
Agents move through a series of well-defined states from creation to termination. Understanding the lifecycle is essential for building reliable production systems.
State Machine
Every agent exists in exactly one of the following states at any point in time:
```
┌─────────┐  deploy   ┌───────────┐  ready   ┌─────────┐
│ Created │ ────────► │ Deploying │ ───────► │ Running │
└─────────┘           └─────┬─────┘          └─┬─────┬─┘
                       fail │            pause │     │ stop
                            ▼                  ▼     │
                      ┌────────┐          ┌────────┐ │
                      │ Failed │          │ Paused │ │
                      └────────┘          └─┬────┬─┘ │
                             resume         │    │   │
                           (→ Running) ◄────┘    ▼   ▼
                                            ┌─────────┐
                                            │ Stopped │
                                            └─────────┘
```
State Descriptions
| State | Description | Billable |
|---|---|---|
| `Created` | Manifest uploaded and validated. No compute resources allocated yet. | No |
| `Deploying` | Compute node assigned, enclave provisioned, code loaded into TEE. Typically takes 10–30 seconds. | No |
| `Running` | Agent is actively executing on triggers and generating proofs. | Yes |
| `Paused` | Execution suspended. Enclave memory is preserved, allowing fast resumption. Triggered manually or by health check failures. | Reduced rate |
| `Stopped` | Agent terminated. A final proof is generated, the enclave wiped, and compute resources released. Irreversible. | No |
| `Failed` | Deployment could not complete. Common causes: enclave attestation failure, invalid manifest, no available nodes. | No |
State Transitions
Transitions can be triggered by the user via CLI or API, or automatically by the protocol.
| Transition | Trigger | CLI Command |
|---|---|---|
| Created → Deploying | User deploys agent | nexusforge deploy |
| Deploying → Running | Enclave attestation succeeds | Automatic |
| Deploying → Failed | Attestation or provisioning fails | Automatic |
| Running → Paused | Manual pause or health check failure | nexusforge pause <agent-id> |
| Paused → Running | User resumes the agent | nexusforge resume <agent-id> |
| Running → Stopped | User stops the agent | nexusforge stop <agent-id> |
| Paused → Stopped | User stops while paused | nexusforge stop <agent-id> |
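The transition table maps directly onto a lookup structure. The sketch below encodes exactly the documented transitions and rejects everything else (illustrative, not protocol code):

```typescript
// The lifecycle transition table above, encoded as a lookup. States and
// transitions mirror the documentation; anything not listed is rejected.
type AgentState = "Created" | "Deploying" | "Running" | "Paused" | "Stopped" | "Failed";
type AgentEvent = "deploy" | "ready" | "fail" | "pause" | "resume" | "stop";

const TRANSITIONS: Partial<Record<AgentState, Partial<Record<AgentEvent, AgentState>>>> = {
  Created:   { deploy: "Deploying" },
  Deploying: { ready: "Running", fail: "Failed" },
  Running:   { pause: "Paused", stop: "Stopped" },
  Paused:    { resume: "Running", stop: "Stopped" },
  // Stopped and Failed are terminal: no outgoing transitions.
};

function transition(state: AgentState, event: AgentEvent): AgentState {
  const next = TRANSITIONS[state]?.[event];
  if (next === undefined) throw new Error(`invalid transition: ${state} + ${event}`);
  return next;
}
```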
Health Checks
The protocol continuously monitors agent health. Every 30 seconds, the compute node performs the following checks:
- Heartbeat — Verifies the agent process is alive inside the enclave.
- Memory — Confirms memory usage is below `execution.memory_limit`.
- Execution latency — Ensures the last execution completed within the configured `timeout`.
- Proof generation — Validates that proofs are being generated on schedule.
If an agent fails 3 consecutive health checks (90 seconds of unhealthy state), the protocol automatically transitions it to the Paused state. The agent owner is notified via webhook and email.
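The auto-pause rule can be sketched as a counter that resets on any passing check and trips at three consecutive failures (90 seconds at a 30-second check interval). Function names here are illustrative:

```typescript
// Sketch of the auto-pause rule: 3 consecutive failed health checks
// (run every 30s, so 90 seconds unhealthy) move the agent to Paused.
// Names are illustrative, not the node's actual code.
const MAX_CONSECUTIVE_FAILURES = 3;

// A passing check resets the streak; a failing one extends it.
function nextFailureCount(consecutiveFailures: number, checkPassed: boolean): number {
  return checkPassed ? 0 : consecutiveFailures + 1;
}

function shouldAutoPause(consecutiveFailures: number): boolean {
  return consecutiveFailures >= MAX_CONSECUTIVE_FAILURES;
}
```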
Restarting Agents
A restart performs a stop followed by a fresh deploy, preserving the agent’s identity and proof history. The enclave is re-provisioned with the latest attested code.
```bash
# Restart a running or paused agent
nexusforge restart <agent-id>

# Force restart (skips graceful shutdown, use with caution)
nexusforge restart <agent-id> --force

# Restart with updated environment secrets
nexusforge restart <agent-id> --refresh-secrets
```
Expected output:
```
▶ Stopping agent agt_7x9k2m4n...
✔ Final proof generated: prf_m4n5o6
▶ Re-provisioning enclave on node_3f8a...
✔ Enclave attested (SGX, DCAP v3)
✔ Code loaded (sha256: 9c3f...a1b2)
▶ Agent agt_7x9k2m4n is now Running

Restart completed in 18.4s
```
Zero-Downtime Upgrades (Blue-Green Deploy)
For production agents that cannot tolerate any downtime, NexusForge supports blue-green deployments. A new instance is deployed alongside the existing one, and traffic is switched over only after the new instance passes health checks.
```bash
# Deploy new version alongside the current one
nexusforge upgrade <agent-id> --strategy blue-green

# The CLI will:
# 1. Deploy v2 on a new enclave (the "green" instance)
# 2. Run health checks on the green instance
# 3. Redirect triggers to the green instance
# 4. Generate a handoff proof linking both execution histories
# 5. Tear down the old "blue" instance
```
You can also perform a canary upgrade, routing a percentage of triggers to the new version:
```bash
# Route 10% of triggers to v2 for validation
nexusforge upgrade <agent-id> --strategy canary --percentage 10

# Promote to 100% after validation
nexusforge upgrade <agent-id> --promote

# Or roll back if issues are detected
nexusforge upgrade <agent-id> --rollback
```
Templates
Templates are pre-built, battle-tested agent scaffolds that give you a production-ready starting point. Each template includes a manifest, execution logic, test suite, and deployment configuration.
What Are Templates?
A template is a versioned package containing everything needed to bootstrap a specific type of agent. Templates are maintained by NexusForge Labs and the community, and are published to the NexusForge Template Registry. Each template includes:
- A pre-configured `agent.config.yaml` manifest
- Execution logic with best-practice patterns for the use case
- Unit and integration tests
- Environment-specific deployment configs (testnet and mainnet)
- Documentation and architecture notes
Available Templates
| Template | ID | Chains | Description |
|---|---|---|---|
| Cross-Chain Arbitrage | `arbitrage` | Ethereum, Base, Arbitrum | Monitors DEX price discrepancies across chains and executes atomic arbitrage with flash loans. |
| Lending Liquidator | `liquidator` | Ethereum, Base | Watches lending protocol health factors and triggers liquidations when positions become undercollateralized. |
| DAO Treasury Manager | `dao-treasury` | Ethereum | Automates treasury operations: yield rebalancing, diversification, and governance-approved disbursements. |
| Price Oracle | `price-oracle` | Ethereum, Solana, Base | Aggregates price feeds from multiple sources and posts verified medianized prices on-chain. |
| Bridge Monitor | `bridge-monitor` | Ethereum, Arbitrum, Optimism, Polygon | Monitors cross-chain bridge contracts for anomalous behavior, fund outflows, and proof validation failures. |
Using a Template
Initialize a new agent from a template with the `nexusforge init` command:
```bash
# Create a new arbitrage agent from the template
nexusforge init --template arbitrage my-arb-bot

# Use a specific template version
nexusforge init --template arbitrage@2.1.0 my-arb-bot

# List all available templates
nexusforge template list
```
Example output:
```
✔ Template "arbitrage@2.3.1" downloaded
✔ Project scaffolded at ./my-arb-bot

Created files:

my-arb-bot/
├── agent.config.yaml
├── src/
│   ├── index.ts
│   ├── strategy.ts
│   ├── pairs.ts
│   └── flash-loan.ts
├── tests/
│   ├── strategy.test.ts
│   └── e2e.test.ts
├── deploy/
│   ├── testnet.yaml
│   └── mainnet.yaml
└── README.md

Next steps:
  cd my-arb-bot
  nexusforge dev     # Start local development
  nexusforge test    # Run test suite
```
Customizing Templates
After scaffolding, you own the code and can modify anything. Common customizations include:
- Adding chains — Add entries to the `chains` array in the manifest and update your execution logic to handle the new chain.
- Changing the trigger — Switch from event-driven to interval-based, or adjust cron schedules.
- Modifying strategy parameters — For arbitrage templates, adjust minimum profit thresholds, slippage tolerance, and gas price ceilings.
- Adding secrets — Store API keys and private configuration in the NexusForge vault and reference them in the manifest.
```typescript
import { ExecutionContext, ChainClient } from "@nexusforge/sdk";

export const config = {
  // Minimum profit threshold in basis points (0.5%)
  minProfitBps: 50,
  // Maximum slippage tolerance (0.3%)
  maxSlippageBps: 30,
  // Gas price ceiling in gwei — skip execution if gas is too high
  maxGasPriceGwei: 35,
  // Pairs to monitor
  pairs: [
    { tokenA: "WETH", tokenB: "USDC", dexA: "uniswap-v3", dexB: "sushiswap" },
    { tokenA: "WETH", tokenB: "DAI", dexA: "uniswap-v3", dexB: "curve" },
    { tokenA: "WBTC", tokenB: "WETH", dexA: "uniswap-v3", dexB: "balancer" },
  ],
};

export async function execute(ctx: ExecutionContext) {
  const eth = ctx.chain("ethereum") as ChainClient;
  const base = ctx.chain("base") as ChainClient;

  for (const pair of config.pairs) {
    const priceA = await eth.getPrice(pair.dexA, pair.tokenA, pair.tokenB);
    const priceB = await base.getPrice(pair.dexB, pair.tokenA, pair.tokenB);
    const spreadBps = (Math.abs(priceA - priceB) / Math.min(priceA, priceB)) * 10000;

    if (spreadBps > config.minProfitBps) {
      ctx.log.info(`Arbitrage opportunity: ${pair.tokenA}/${pair.tokenB} spread=${spreadBps}bps`);
      await ctx.executeSwap({ pair, priceA, priceB, spreadBps });
    }
  }
}
```
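Because the strategy math is pure arithmetic, it can be factored out of the handler and unit-tested without any chain access. An illustrative refactor of the spread check (not part of the shipped template):

```typescript
// Illustrative refactor: the spread computation from the strategy above as
// pure functions, so they can be unit-tested without chain access.
function spreadBps(priceA: number, priceB: number): number {
  return (Math.abs(priceA - priceB) / Math.min(priceA, priceB)) * 10000;
}

// True when the spread clears the configured profit threshold.
function isOpportunity(priceA: number, priceB: number, minProfitBps: number): boolean {
  return spreadBps(priceA, priceB) > minProfitBps;
}
```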
Publishing Your Own Template
Share your agent patterns with the community by publishing templates to the registry.
```bash
# Prepare your project as a template
nexusforge template init

# Validate the template structure
nexusforge template validate

# Publish to the registry
nexusforge template publish
```
Templates require a `template.yaml` metadata file in the project root:
```yaml
name: my-custom-strategy
version: 1.0.0
description: "Custom mean-reversion strategy across L2 DEXs"
author: "your-nexusforge-username"
license: MIT
tags:
  - defi
  - arbitrage
  - l2
chains:
  - base
  - arbitrum
  - optimism
min_cli_version: "2.4.0"
```
Debugging
NexusForge provides a comprehensive debugging toolkit for developing, testing, and diagnosing agent issues — from local development mode to production execution replay.
Local Development Mode
Run your agent locally with a simulated enclave and forked chain state. Local mode provides instant feedback without deploying to the network.
```bash
# Start in dev mode with verbose logging
nexusforge dev --verbose

# Dev mode with a specific chain fork
nexusforge dev --fork ethereum --block 18294000

# Dev mode with simulated event triggers
nexusforge dev --trigger-file triggers.json
```
In verbose mode, every execution step is logged with timing information:
```
[2026-04-16T12:00:00.000Z] INFO  Agent started in dev mode
[2026-04-16T12:00:00.012Z] DEBUG Forking ethereum at block 18294000
[2026-04-16T12:00:00.450Z] DEBUG Chain state loaded (482 accounts cached)
[2026-04-16T12:00:01.001Z] INFO  Trigger fired: interval (30s)
[2026-04-16T12:00:01.003Z] DEBUG Entering execute()
[2026-04-16T12:00:01.045Z] DEBUG getPrice(uniswap-v3, WETH, USDC) = 3,241.87
[2026-04-16T12:00:01.062Z] DEBUG getPrice(sushiswap, WETH, USDC) = 3,238.14
[2026-04-16T12:00:01.063Z] INFO  Spread detected: 11.5bps (below threshold 50bps)
[2026-04-16T12:00:01.064Z] DEBUG Execution complete (61ms, no action taken)
[2026-04-16T12:00:01.070Z] DEBUG Proof generated (simulated): prf_dev_a1b2c3
```
Log Levels
Agents support four log levels, configurable via the CLI or manifest. In production, logs are streamed to the NexusForge dashboard and can be exported to external sinks.
| Level | Usage | Production Default |
|---|---|---|
| `DEBUG` | Detailed execution traces, variable values, internal state. Use for development only. | Off |
| `INFO` | Key execution events: triggers fired, actions taken, proofs generated. | On |
| `WARN` | Non-critical issues: high gas prices causing skipped executions, approaching rate limits. | On |
| `ERROR` | Execution failures: reverted transactions, proof generation failures, chain RPC errors. | On |
```bash
# Stream live logs from a deployed agent
nexusforge logs <agent-id> --follow

# Filter by level
nexusforge logs <agent-id> --level error

# Export logs for a time range
nexusforge logs <agent-id> --from 2026-04-15T00:00:00Z --to 2026-04-16T00:00:00Z --output logs.json
```
Replaying Executions
Every agent execution is deterministic, meaning you can replay any historical execution locally to inspect what happened. The replay uses the exact same chain state, inputs, and randomness seed as the original.
```bash
# Replay a specific execution
nexusforge replay <execution-id>

# Replay with step-by-step debugging
nexusforge replay <execution-id> --step

# Replay and compare output against the original proof
nexusforge replay <execution-id> --verify
```
The `--step` flag pauses at each execution step and lets you inspect state:
```
Step 1/7: getPrice(uniswap-v3, WETH, USDC)
  Result: 3241.87
  Gas: 24,000

[n]ext  [i]nspect  [c]ontinue  [q]uit > i

Local state:
  pair: WETH/USDC
  priceA: 3241.87
  priceB: (pending)
  spreadBps: (pending)

[n]ext  [i]nspect  [c]ontinue  [q]uit > n
```
Inspecting Proofs
Examine the contents and validity of any proof generated by your agent or any other agent on the network.
```bash
nexusforge proof inspect <proof-id>
```
Example output:
```
Proof:     prf_a1b2c3
Agent:     agt_7x9k2m4n (multi-chain-arb-v3)
Execution: exec_d4e5f6
System:    plonky3
Status:    ✔ Verified on-chain

Inputs:
  Block (ETH):  18,294,017
  Block (Base): 9,412,088
  Trigger:      Swap event on 0x7a25...88D
  Timestamp:    2026-04-16T11:59:30Z

Outputs:
  Actions:     2 swaps executed
  Gas used:    284,000
  Value moved: 1.24 ETH

Chain anchor:
  Chain:         ethereum
  TX:            0x8b4c...1d2e
  Block:         18,294,019
  Confirmations: 847

Recursive proof chain:
  prf_a1b2c3 ← prf_z9y8x7 ← prf_w6v5u4 (root)
```
Common Errors
| Error Code | Message | Cause | Fix |
|---|---|---|---|
| `E1001` | `ENCLAVE_ATTESTATION_FAILED` | The compute node’s TEE could not be verified by the attestation service. | Retry deployment. If persistent, the node’s hardware may have a firmware issue. Try `--node <different-node-id>`. |
| `E2001` | `EXECUTION_TIMEOUT` | Agent execution exceeded the configured `timeout`. | Optimize your execution logic or increase `execution.timeout` in the manifest (max `5m`). |
| `E2002` | `OUT_OF_MEMORY` | Agent exceeded `execution.memory_limit`. | Profile memory usage locally with `nexusforge dev --profile memory`. Increase the limit or reduce data held in memory. |
| `E3001` | `PROOF_GENERATION_FAILED` | The prover could not generate a valid proof for the execution trace. | Check for non-deterministic operations (timestamps, random values) in your code. Use `ctx.deterministicRandom()` instead. |
| `E4001` | `CHAIN_RPC_UNREACHABLE` | Unable to connect to the target chain’s RPC endpoint. | Check chain status on the NexusForge Status Page. The issue is usually transient; the agent will auto-retry. |
| `E4002` | `TRANSACTION_REVERTED` | An on-chain transaction submitted by the agent was reverted. | Replay the execution with `nexusforge replay <exec-id> --step` to identify the revert reason. Common causes: insufficient allowance, slippage exceeded, or stale price data. |
Web Debugger Dashboard
The NexusForge Dashboard includes a visual debugger for production agents. Access it at https://app.nexusforge.io/agents/<agent-id>/debug.
- Execution Timeline — Visual timeline of all executions with status indicators (success, skipped, error).
- State Inspector — Browse the agent’s internal state at any point in its execution history.
- Proof Explorer — Navigate the recursive proof chain with an interactive graph visualization.
- Log Viewer — Searchable, filterable log stream with level and time-range controls.
- Gas Profiler — Breakdown of gas consumption per execution step, with optimization suggestions.
- Alert Configuration — Set up alerts for specific error codes, execution duration thresholds, or proof failures.
```bash
# Open the web debugger for an agent
nexusforge debug <agent-id> --open

# Launch the local debugger UI (for dev mode)
nexusforge dev --debugger
```
Node Requirements
Compute nodes provide the secure hardware that runs NexusForge agents. Operators must meet strict hardware and software requirements to ensure execution integrity and network reliability.
Minimum Hardware Specifications
All nodes must meet or exceed the following specifications to join the network. Nodes that fall below these thresholds will fail the onboarding attestation check.
| Component | Minimum Requirement | Notes |
|---|---|---|
| CPU | 8+ cores, Intel SGX or AMD SEV capable | Intel Xeon E-2300 series or AMD EPYC 7003 series or newer. Must support hardware-based TEE. |
| RAM | 32 GB DDR4 ECC | ECC memory required for deterministic execution guarantees. 16 GB reserved for enclave use. |
| Storage | 2 TB NVMe SSD | Minimum 3,000 MB/s sequential read. Used for chain state caching and proof storage. |
| Network | 100 Mbps symmetric | Stable, low-latency connection required. Maximum acceptable latency to NexusForge relayers: 150ms. |
| UPS / Power | 30-minute battery backup | Required to generate final proofs during unexpected power loss. |
Recommended Specifications (High-Performance Tier)
Nodes that exceed minimum specs are eligible for the high-performance tier, which receives priority agent assignments and higher reward multipliers (1.5x).
| Component | Recommended |
|---|---|
| CPU | 16+ cores, Intel Xeon w5-3400 or AMD EPYC 9004 series |
| RAM | 128 GB DDR5 ECC |
| Storage | 4 TB NVMe SSD (PCIe Gen5, 7,000+ MB/s read) |
| Network | 1 Gbps symmetric, <50ms latency to major cloud regions |
| GPU (optional) | NVIDIA H100 or A100 for accelerated proof generation |
Supported TEE Platforms
NexusForge requires hardware-level Trusted Execution Environments. The following platforms are supported:
| Platform | Status | Attestation | Notes |
|---|---|---|---|
| Intel SGX v2 | Stable | DCAP (Data Center Attestation Primitives) | Recommended for most operators. Widest hardware availability. |
| AMD SEV-SNP | Stable | VCEK-based remote attestation | Preferred for high-memory workloads (no EPC size limitations). |
| ARM TrustZone | Beta | PSA Certified attestation | Currently limited to edge use cases. Not recommended for mainnet agents. |
Supported Operating Systems
- Ubuntu 22.04 LTS (Jammy Jellyfish) — Primary supported OS. All development and testing is done here.
- Debian 12 (Bookworm) — Fully supported. Recommended for operators who prefer Debian’s stability policies.
Other Linux distributions may work but are not officially supported. Windows and macOS are not supported for node operation (use them for development only via `nexusforge dev`).
Port Requirements
The following ports must be open and accessible. Configure your firewall and router accordingly.
| Port | Protocol | Direction | Purpose |
|---|---|---|---|
| `8545` | TCP | Inbound | JSON-RPC endpoint for agent management and local chain interaction. |
| `30303` | TCP/UDP | Both | Peer-to-peer communication with other NexusForge nodes for proof gossip and state sync. |
| `9090` | TCP | Inbound | Prometheus metrics endpoint. Used by the NexusForge monitoring infrastructure and your own dashboards. |
| `443` | TCP | Outbound | HTTPS connections to NexusForge API, attestation services, and chain RPC endpoints. |
Verifying Your Setup
Run the built-in hardware check before registering your node:
```bash
nexusforge node check
```
Example output for a qualifying node:
```
✔ CPU: Intel Xeon E-2388G (8 cores, 16 threads, SGX v2)
✔ RAM: 64 GB DDR4 ECC (32 GB available for enclaves)
✔ Storage: 3.8 TB NVMe (Samsung 990 Pro, 3,500 MB/s read)
✔ Network: 412 Mbps down / 398 Mbps up (latency: 23ms to us-east-1)
✔ TEE: Intel SGX v2, DCAP driver v1.17
✔ OS: Ubuntu 22.04.4 LTS (kernel 6.5.0-44-generic)
✔ Ports: 8545 ✔  30303 ✔  9090 ✔  443 ✔

Result: ELIGIBLE (high-performance tier)
Estimated reward multiplier: 1.5x
```
Staking
Compute node operators must stake NXF tokens to participate in the network. Staking provides economic security by aligning operator incentives with honest execution.
Staking Requirement
The minimum stake to operate a compute node is 32 NXF. This stake is locked in the NexusForge staking contract and serves as collateral for honest behavior.
| Tier | Minimum Stake | Max Concurrent Agents | Reward Multiplier |
|---|---|---|---|
| Standard | 32 NXF | 10 | 1.0x |
| Professional | 128 NXF | 50 | 1.25x |
| Enterprise | 512 NXF | Unlimited | 1.5x |
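The tier table maps cleanly onto a lookup by stake amount. A minimal sketch (thresholds from the table; the function itself is illustrative):

```typescript
// The staking tiers above as a lookup. Thresholds and multipliers come from
// the table; the lookup function is illustrative.
interface Tier {
  name: "Standard" | "Professional" | "Enterprise";
  minStake: number;         // NXF
  maxAgents: number | null; // null = unlimited
  rewardMultiplier: number;
}

// Ordered highest-first so `find` returns the best tier the stake meets.
const TIERS: Tier[] = [
  { name: "Enterprise",   minStake: 512, maxAgents: null, rewardMultiplier: 1.5 },
  { name: "Professional", minStake: 128, maxAgents: 50,   rewardMultiplier: 1.25 },
  { name: "Standard",     minStake: 32,  maxAgents: 10,   rewardMultiplier: 1.0 },
];

// Highest tier whose minimum the stake meets, or null below the 32 NXF floor.
function tierFor(stakeNxf: number): Tier | null {
  return TIERS.find((t) => stakeNxf >= t.minStake) ?? null;
}
```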
Staking Contract
The NXF staking contract is deployed on Ethereum mainnet:
```
Contract: 0x4E2a6fBc6c47F5E8C2e7032B8C4e2F3aD1bC9e7A
Network:  Ethereum Mainnet
Standard: ERC-4626 (Tokenized Vault)
Audited:  Trail of Bits (March 2026), OpenZeppelin (January 2026)
```
How to Stake
```bash
# Stake the minimum amount (32 NXF)
nexusforge node stake --amount 32

# Stake a custom amount
nexusforge node stake --amount 128

# Check your current stake
nexusforge node stake --status
```
Example staking flow:
```
▶ Checking NXF balance... 250.00 NXF available
▶ Approving staking contract for 32.00 NXF...
✔ Approval TX: 0x1a2b...3c4d (confirmed in block 19,481,002)
▶ Staking 32.00 NXF...
✔ Stake TX: 0x5e6f...7a8b (confirmed in block 19,481,003)

Staking complete.
  Amount:     32.00 NXF
  Tier:       Standard
  Node ID:    node_3f8a
  Max agents: 10
  Status:     Active
```
Unstaking
Unstaking initiates a 7-day cooldown period during which your tokens remain locked. This cooldown exists to ensure any pending proofs can be verified and any disputes can be resolved before the operator exits.
```bash
# Request unstake (starts 7-day cooldown)
nexusforge node unstake --amount 32

# Check cooldown status
nexusforge node unstake --status

# Withdraw after cooldown completes
nexusforge node withdraw
```
Important: You cannot unstake below the minimum threshold for your active agent count. If you have 5 active agents, you must maintain at least 32 NXF staked. Stop agents first, then unstake.
Slashing Conditions
Staked tokens are subject to slashing if the operator violates protocol rules. Slashing is automated and enforced by the staking contract based on on-chain evidence.
| Violation | Slash Amount | Detection Method | Appeals |
|---|---|---|---|
| Failed proof — Agent produces an execution that cannot generate a valid proof. | 0.01 NXF | Automatic on-chain proof verification failure. | Auto-forgiven if <3 occurrences per 30 days. |
| Extended downtime — Node is unreachable for more than 4 consecutive hours. | 0.1 NXF | P2P heartbeat monitoring by peer nodes. | Can appeal within 48h if caused by network-wide outage. |
| Malicious behavior — Attempting to forge proofs, tamper with execution, or submit fraudulent attestations. | Full stake | Fraud proofs submitted by verifiers or other nodes. | Reviewed by the NexusForge Security Council. 7-day dispute window. |
| Stale attestation — Node continues operating with expired or revoked TEE attestation. | 0.5 NXF | Attestation validity checked every 24 hours. | Grace period of 72 hours to renew attestation before slash. |
Delegation
NXF token holders who do not want to operate a node can delegate their tokens to an existing node operator. Delegators earn a share of the operator’s rewards proportional to their delegation amount, minus the operator’s commission.
# Delegate to a node operator
nexusforge delegate --to node_3f8a --amount 64
# Check delegation status and earned rewards
nexusforge delegate --status
# Claim accumulated rewards
nexusforge delegate --claim
# Undelegate (subject to 7-day cooldown)
nexusforge delegate --undelegate --amount 64
Operators set their commission rate when registering their node. The default commission is 10% of delegator rewards.
# Set operator commission rate (as a node operator)
nexusforge node config --commission 10
# View operator details including commission and total delegations
nexusforge node info node_3f8a
Example output:
Node: node_3f8a
Operator: 0x9a8b...7c6d
Status: Active
TEE: Intel SGX v2 (attested 2026-04-14T08:00:00Z)
Tier: Professional (128 NXF staked)
Commission: 10%
Total delegated: 2,048 NXF (from 47 delegators)
Active agents: 23 / 50
Uptime (30d): 99.94%
Slashing events: 0
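As a back-of-the-envelope sketch of the pro-rata split described above — the exact on-chain reward accounting is not specified here, so treat this as an assumption rather than the contract's formula:

```typescript
// A delegator's share of one epoch's rewards: proportional to the
// delegated amount within total stake, minus the operator's commission.
function delegatorReward(
  epochReward: number, // total NXF earned by the node this epoch
  delegated: number,   // this delegator's NXF
  totalStake: number,  // operator stake plus all delegations
  commission: number   // e.g. 0.10 for the 10% default
): number {
  const gross = epochReward * (delegated / totalStake);
  return gross * (1 - commission);
}
```

For example, 64 NXF delegated to a node with 2,176 NXF of total stake (128 self-staked plus 2,048 delegated) that earns 1 NXF in an epoch would yield roughly 0.026 NXF at the default 10% commission.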
Rewards
Node operators earn NXF rewards for providing reliable compute to the NexusForge network. Rewards are distributed automatically every epoch and accrue proportionally to each operator's staked amount, uptime, and execution quality.
Base Reward Rate
All staked NXF earns a base rate of 15% APY. This rate is protocol-governed and may be adjusted through NXF governance proposals. The base rate applies uniformly to all operators who meet the minimum uptime threshold of 95%.
Performance Bonus
Operators who consistently maintain high availability are eligible for a performance bonus of up to 5% additional APY, bringing the maximum effective rate to 20% APY. The bonus is calculated using the following tiers:
| Uptime | Bonus APY | Effective APY |
|---|---|---|
| < 95% | 0% (slashing risk) | 0% |
| 95% – 98% | +0% | 15% |
| 98% – 99% | +1.5% | 16.5% |
| 99% – 99.9% | +3% | 18% |
| > 99.9% | +5% | 20% |
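The tier lookup can be sketched as a small function. Note one assumption: the table does not say which tier owns the exact boundaries (98% and 99% appear in two rows each), so half-open intervals are assumed here.

```typescript
// Bonus APY (in percentage points) for a 30-day uptime figure, per the
// Performance Bonus table. Boundary ownership (exactly 98% or 99%) is
// not specified in the table; half-open intervals are assumed.
function bonusApy(uptimePct: number): number {
  if (uptimePct > 99.9) return 5;
  if (uptimePct >= 99) return 3;
  if (uptimePct >= 98) return 1.5;
  return 0; // below 98%: no bonus; below 95%: slashing risk
}
```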
Epoch & Distribution
Rewards are distributed once per epoch, which lasts 6 hours (4 epochs per day). At the end of each epoch, the protocol calculates each operator's share of the reward pool based on their stake weight and performance score. Rewards are deposited directly into the operator's on-chain reward balance and can be claimed at any time.
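Epoch boundaries in the examples in this section fall on 00:00/06:00/12:00/18:00 UTC (see the Historical Rewards table below), so the next distribution time can be computed as follows. The UTC alignment is inferred from those examples rather than stated by the protocol:

```typescript
const EPOCH_MS = 6 * 60 * 60 * 1000; // 6-hour epochs, 4 per day

// Start of the next epoch after `now`, assuming epochs are aligned to
// 00:00 / 06:00 / 12:00 / 18:00 UTC.
function nextEpochStart(now: Date): Date {
  return new Date(Math.floor(now.getTime() / EPOCH_MS) * EPOCH_MS + EPOCH_MS);
}
```

For instance, at 21:46 UTC the next epoch starts at 00:00 UTC, 2 hours 14 minutes later — matching the `Next epoch` line shown in the Claiming Rewards example.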
Reward Calculation Formula
The per-epoch reward for a given node is computed as follows:
epoch_reward = (stake * base_rate * performance_multiplier) / epochs_per_year
where:
stake = amount of NXF staked by the operator
base_rate = 0.15 (15% APY)
performance_multiplier = 1.0 + bonus_rate (e.g. 1.0333 in the 99.95%-uptime example below)
epochs_per_year = 1460 (4 epochs/day * 365 days)
For example, a node with 50,000 NXF staked and 99.95% uptime (5% bonus) would earn:
(50000 * 0.15 * 1.0333) / 1460 = 5.308 NXF per epoch
= ~21.23 NXF per day
= ~7,750 NXF per year
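The worked example above can be reproduced directly from the formula. The figures follow the document's own example; the helper name is illustrative:

```typescript
const EPOCHS_PER_YEAR = 1460; // 4 epochs/day * 365 days

// Per-epoch reward, as defined by the formula above.
function epochReward(
  stake: number,
  baseRate = 0.15,
  performanceMultiplier = 1.0
): number {
  return (stake * baseRate * performanceMultiplier) / EPOCHS_PER_YEAR;
}

const perEpoch = epochReward(50_000, 0.15, 1.0333);
// ≈ 5.308 NXF per epoch; multiply by 4 for the daily figure and by
// 1460 for the yearly figure (~7,750 NXF).
```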
Claiming Rewards
Accumulated rewards can be claimed at any time using the CLI:
nexusforge node rewards claim
You can also check your pending balance before claiming:
nexusforge node rewards balance
Pending rewards: 127.42 NXF
Last claim: 2026-04-15T18:00:00Z
Next epoch: 2026-04-16T00:00:00Z (in 2h 14m)
Historical Rewards
The following table shows reward distributions for the last four epochs for a reference node (50,000 NXF staked, 99.97% uptime):
| Epoch | Timestamp | Uptime | Executions | Reward (NXF) |
|---|---|---|---|---|
| #184,201 | 2026-04-16 06:00 UTC | 99.98% | 3,412 | 5.312 |
| #184,200 | 2026-04-16 00:00 UTC | 99.97% | 3,387 | 5.308 |
| #184,199 | 2026-04-15 18:00 UTC | 100.00% | 3,501 | 5.319 |
| #184,198 | 2026-04-15 12:00 UTC | 99.95% | 3,290 | 5.305 |
Hardware Setup
This guide walks you through setting up a NexusForge compute node from bare metal to a fully registered, attestation-verified operator earning rewards on mainnet.
Step 1: Enable SGX in BIOS
NexusForge nodes require Intel SGX (Software Guard Extensions) for Trusted Execution Environment support. To enable SGX:
- Reboot your machine and enter BIOS/UEFI setup (typically F2, Del, or F12 during POST).
- Navigate to Security or Advanced → CPU Configuration.
- Set Intel SGX to Enabled (not "Software Controlled").
- Set SGX Reserved Memory Size to at least 128 MB (256 MB recommended).
- Save and exit BIOS. The system will reboot.
After rebooting, verify SGX is active:
dmesg | grep -i sgx
# Expected: sgx: EPC section ... (should show memory regions)
Step 2: Install Node Software
Run the official install script to set up the NexusForge node daemon, CLI tools, and required dependencies:
curl -sSL https://install.nexusforge.io | bash
The installer will:
- Detect your CPU architecture and SGX support level.
- Install the nexusforge CLI and nxf-node daemon.
- Install the Intel SGX SDK and Platform Software (PSW).
- Generate a node keypair stored at ~/.nexusforge/node_key.json.
- Create a systemd service, nxf-node.service, for automatic restarts.
Verify the installation:
nexusforge --version
# nexusforge-cli 0.9.2 (build 2026-04-01)
nexusforge node status
# Status: initialized (not yet registered)
Step 3: Configure the Node
Edit the node configuration file at ~/.nexusforge/node.config.yaml:
node:
name: "my-node-us-east"
operator_address: "0xYourWalletAddressHere"
region: "us-east-1"
network:
rpc_endpoints:
ethereum: "https://eth-mainnet.g.alchemy.com/v2/YOUR_KEY"
base: "https://mainnet.base.org"
solana: "https://api.mainnet-beta.solana.com"
listen_port: 9100
p2p_port: 9101
tee:
type: sgx
enclave_size: "256MB"
attestation_provider: "intel-dcap"
quote_refresh_interval: "6h"
storage:
data_dir: "/var/lib/nexusforge"
proof_cache_size: "10GB"
log_level: "info"
metrics:
enabled: true
prometheus_port: 9102
Step 4: Register On-Chain
Register your node with the NexusForge protocol contract. Registration requires at least the minimum stake for your target tier (see Staking):
nexusforge node register \
--stake 50000 \
--operator 0xYourWalletAddressHere
Registering node on-chain...
Tx: 0x8b4e...1f2a
Node: node_3f8a7c
Stake: 50,000 NXF
Status: registered (pending attestation)
Step 5: Verify Attestation
Before your node can receive agent workloads, its TEE environment must pass remote attestation:
nexusforge node attest
Generating SGX quote...
Submitting attestation to DCAP verifier...
✔ Attestation verified
MRENCLAVE: 0x4a7b...8e2f
MRSIGNER: 0x1c3d...5a6b
TCB Status: UpToDate
Node: node_3f8a7c
Status: active (ready for workloads)
Step 6: Start the Node
sudo systemctl enable nxf-node
sudo systemctl start nxf-node
# Check status
nexusforge node status
Node: node_3f8a7c
Status: active
Uptime: 0h 2m
Agents: 0 / 50 (capacity)
Stake: 50,000 NXF
Attestation: valid (expires in 5h 58m)
Step 7: Monitoring with Grafana
NexusForge nodes expose Prometheus-compatible metrics on the configured prometheus_port. To set up a Grafana dashboard:
- Ensure Prometheus is scraping localhost:9102/metrics.
- Import the official NexusForge Grafana dashboard (ID 19847) from grafana.com.
- The dashboard includes panels for uptime percentage, active agents, proof generation latency, epoch rewards, memory/CPU utilization, and attestation status.
scrape_configs:
- job_name: "nexusforge-node"
static_configs:
- targets: ["localhost:9102"]
scrape_interval: 15s
Compute API
Interact with compute nodes on the NexusForge network. Query node availability, inspect performance metrics, and delegate stake programmatically.
/nodes
List all registered compute nodes. Supports filtering by status, region, and min_stake query parameters.
{
"data": [
{
"id": "node_3f8a7c",
"name": "my-node-us-east",
"operator": "0x1a2b...3c4d",
"status": "active",
"region": "us-east-1",
"stake": 50000,
"uptime": 99.97,
"agents_active": 12,
"capacity": 50,
"tee": "sgx",
"registered_at": "2026-03-01T08:00:00Z"
},
{
"id": "node_9e1b2d",
"name": "validator-eu-west",
"operator": "0x5e6f...7a8b",
"status": "active",
"region": "eu-west-1",
"stake": 125000,
"uptime": 99.99,
"agents_active": 38,
"capacity": 50,
"tee": "sgx",
"registered_at": "2026-01-15T14:30:00Z"
}
],
"pagination": {
"total": 847,
"cursor": "eyJpZCI6Mn0=",
"has_more": true
}
}
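A minimal sketch of querying this endpoint with the documented filters. The base URL `https://api.nexusforge.io/v1` is an assumption for illustration and may differ in your environment:

```typescript
// Build a /nodes request URL with the documented filters
// (status, region, min_stake). The base URL is an assumption.
function nodesUrl(
  params: { status?: string; region?: string; min_stake?: number },
  base = "https://api.nexusforge.io/v1"
): string {
  const url = new URL(`${base}/nodes`);
  for (const [key, value] of Object.entries(params)) {
    if (value !== undefined) url.searchParams.set(key, String(value));
  }
  return url.toString();
}

// Usage (hypothetical endpoint):
// fetch(nodesUrl({ status: "active", region: "us-east-1", min_stake: 50000 }))
//   .then((res) => res.json())
//   .then((body) => console.log(body.data.length, "nodes"));
```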
/nodes/:id
Get detailed information about a specific compute node, including its configuration, attestation status, and current workload.
{
"id": "node_3f8a7c",
"name": "my-node-us-east",
"operator": "0x1a2b...3c4d",
"status": "active",
"region": "us-east-1",
"stake": 50000,
"delegated_stake": 23500,
"total_stake": 73500,
"uptime": 99.97,
"agents_active": 12,
"capacity": 50,
"tee": {
"type": "sgx",
"mrenclave": "0x4a7b...8e2f",
"attestation_status": "valid",
"attestation_expires": "2026-04-16T12:00:00Z"
},
"rewards": {
"total_earned": 7842.16,
"pending": 127.42,
"last_epoch": "#184,201"
},
"registered_at": "2026-03-01T08:00:00Z"
}
/nodes/:id/metrics
Retrieve performance metrics for a node over a specified time range. Defaults to the last 24 hours. Use from and to query parameters for custom ranges.
{
"node_id": "node_3f8a7c",
"period": {
"from": "2026-04-15T12:00:00Z",
"to": "2026-04-16T12:00:00Z"
},
"uptime_pct": 99.97,
"total_executions": 13589,
"proofs_generated": 13589,
"proofs_verified": 13589,
"avg_proof_latency_ms": 142,
"p99_proof_latency_ms": 318,
"total_gas_used": 3842910000,
"epochs_completed": 4,
"rewards_earned": 21.244
}
/nodes/:id/delegate
Delegate NXF stake to a node operator. Delegated stake earns a proportional share of the node's rewards minus the operator's commission rate.
{
"amount": 10000,
"delegator": "0x9c8d...7e6f"
}
{
"delegation_id": "del_k4m2n8",
"node_id": "node_3f8a7c",
"delegator": "0x9c8d...7e6f",
"amount": 10000,
"commission_rate": 0.10,
"effective_epoch": "#184,202",
"created_at": "2026-04-16T12:15:00Z"
}
Proofs API
Query, verify, and aggregate zero-knowledge proofs generated by NexusForge agents. All proofs use the Plonky3 proving system and can be verified both on-chain and off-chain.
/proofs/:id
Retrieve the full details of a specific proof, including the raw proof data, public inputs, and verification key.
{
"proof_id": "prf_a1b2c3",
"agent_id": "agt_7x9k2m4n",
"node_id": "node_3f8a7c",
"status": "verified",
"system": "plonky3",
"block": 18294017,
"tx_hash": "0x7a3f...c912",
"proof_data": {
"commitments": [
"0x0a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d",
"0x6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b"
],
"evaluations": [
"0x2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f",
"0x8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d"
],
"opening_proof": "0x4e5f6a7b8c9d0e1f...a3b4c5d6e7f8a9b"
},
"public_inputs": [
"0x00000000000000000000000000000001",
"0x7a3fc912e4b8d5a16f2c0e9b3d7a4f8c",
"0x00000000000000000000000000077359"
],
"verification_key": {
"circuit_digest": "0x1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c",
"num_public_inputs": 3,
"constants_sigmas_cap": "0x7d8e9f0a1b2c3d4e..."
},
"gas_used": 284000,
"verified_at": "2026-04-16T11:59:32Z",
"created_at": "2026-04-16T11:59:30Z"
}
/proofs/:id/verify
Verify a proof off-chain using the NexusForge verification service. This is useful for checking proof validity without incurring gas costs. The verification uses the same circuit logic as the on-chain verifier contract.
{
"proof_id": "prf_a1b2c3",
"valid": true,
"verification_time_ms": 89,
"verifier_version": "plonky3-v0.4.1",
"public_inputs_match": true,
"circuit_hash": "0x1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c"
}
/proofs/aggregate
Retrieve an aggregated proof that combines multiple individual proofs into a single succinct proof for batch verification. Pass proof IDs via the proof_ids query parameter (comma-separated) or a block_range.
{
"aggregate_id": "agg_x7y8z9",
"proof_count": 128,
"block_range": {
"from": 18293900,
"to": 18294017
},
"aggregate_proof": "0x9a8b7c6d5e4f3a2b...",
"aggregate_verification_key": "0x1c2d3e4f5a6b7c8d...",
"compression_ratio": "128:1",
"verification_time_ms": 210,
"created_at": "2026-04-16T12:00:05Z"
}
Proof Status Codes
Every proof object includes a status field indicating its position in the verification lifecycle:
| Status | Description |
|---|---|
| `generating` | Proof is being computed inside the TEE. Typically takes 100–500 ms. |
| `pending` | Proof has been generated and is awaiting on-chain submission. |
| `submitted` | Proof transaction has been broadcast but not yet confirmed. |
| `verified` | Proof has been verified by the on-chain verifier contract. |
| `failed` | Proof verification failed. The associated node may be subject to slashing. |
| `expired` | Proof was not submitted within the epoch window and is no longer valid. |
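Putting the lifecycle together, a client can poll /proofs/:id until a terminal status is reached. The fetcher is injected so this sketch stays transport-agnostic; the helper names are illustrative, not part of any SDK:

```typescript
type ProofStatus =
  | "generating" | "pending" | "submitted"
  | "verified" | "failed" | "expired";

// Terminal statuses, per the table above.
const TERMINAL: ReadonlySet<ProofStatus> = new Set([
  "verified", "failed", "expired",
]);

// Poll until the proof reaches a terminal status. `getStatus` would
// typically wrap a GET /proofs/:id call; it is injected here so the
// helper can be exercised without a network.
async function waitForProof(
  proofId: string,
  getStatus: (id: string) => Promise<ProofStatus>,
  intervalMs = 1000,
  maxAttempts = 30
): Promise<ProofStatus> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await getStatus(proofId);
    if (TERMINAL.has(status)) return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`proof ${proofId} still non-terminal after ${maxAttempts} polls`);
}
```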
Webhooks
Webhooks allow your application to receive real-time notifications when events occur on the NexusForge protocol. Instead of polling the API, you register an HTTPS endpoint and NexusForge pushes event payloads to it automatically.
Supported Events
| Event | Description |
|---|---|
| `agent.started` | An agent has been deployed and begun execution on a compute node. |
| `agent.stopped` | An agent has been stopped, either manually or due to resource limits. |
| `agent.error` | An agent encountered a runtime error during execution. |
| `proof.generated` | A new zk-proof has been generated for an agent execution. |
| `proof.verified` | A proof has been successfully verified on-chain. |
| `node.slashed` | A compute node has been slashed for misbehavior or downtime. |
Creating a Webhook
/webhooks
Register a new webhook endpoint to receive event notifications.
{
"url": "https://your-app.com/api/nexusforge/webhook",
"events": [
"agent.started",
"agent.stopped",
"proof.verified"
],
"secret": "whsec_your_signing_secret_here"
}
{
"id": "wh_p3q4r5",
"url": "https://your-app.com/api/nexusforge/webhook",
"events": [
"agent.started",
"agent.stopped",
"proof.verified"
],
"status": "active",
"created_at": "2026-04-16T12:00:00Z"
}
Payload Format
Each webhook delivery includes the following headers and JSON body:
Content-Type: application/json
X-NexusForge-Event: proof.verified
X-NexusForge-Delivery: del_8a7b6c5d
X-NexusForge-Signature: sha256=a1b2c3d4e5f6...
X-NexusForge-Timestamp: 1713264000
{
"id": "evt_m9n8o7",
"type": "proof.verified",
"created_at": "2026-04-16T12:00:00Z",
"data": {
"proof_id": "prf_a1b2c3",
"agent_id": "agt_7x9k2m4n",
"block": 18294017,
"tx_hash": "0x7a3f...c912",
"verified": true
}
}
Signature Verification
Every webhook payload is signed with your webhook secret using HMAC-SHA256. Always verify the signature before processing the payload to ensure it originated from NexusForge.
The signature is computed over the raw request body concatenated with the timestamp header:
signature = HMAC-SHA256(secret, timestamp + "." + raw_body)
Example Webhook Handler
import express from "express";
import crypto from "crypto";
const app = express();
app.use(express.raw({ type: "application/json" }));
const WEBHOOK_SECRET = process.env.NXF_WEBHOOK_SECRET!;
app.post("/api/nexusforge/webhook", (req, res) => {
const signature = req.headers["x-nexusforge-signature"] as string;
const timestamp = req.headers["x-nexusforge-timestamp"] as string;
// Verify signature with a constant-time comparison
const expected = crypto
.createHmac("sha256", WEBHOOK_SECRET)
.update(`${timestamp}.${req.body.toString("utf8")}`)
.digest("hex");
const provided = (signature ?? "").replace(/^sha256=/, "");
if (
provided.length !== expected.length ||
!crypto.timingSafeEqual(Buffer.from(provided), Buffer.from(expected))
) {
return res.status(401).json({ error: "Invalid signature" });
}
// Reject stale deliveries (> 5 minutes old)
const age = Date.now() / 1000 - parseInt(timestamp);
if (age > 300) {
return res.status(408).json({ error: "Timestamp too old" });
}
const event = JSON.parse(req.body.toString());
console.log(`Received ${event.type}:`, event.data);
// Handle specific events
switch (event.type) {
case "proof.verified":
console.log(`Proof ${event.data.proof_id} verified at block ${event.data.block}`);
break;
case "agent.error":
console.error(`Agent ${event.data.agent_id} error:`, event.data.message);
break;
}
res.status(200).json({ received: true });
});
app.listen(3000);
Retry Policy
If your endpoint returns a non-2xx status code or does not respond within 10 seconds, NexusForge will retry the delivery up to 3 times with exponential backoff:
- Retry 1: 30 seconds after initial failure
- Retry 2: 2 minutes after retry 1
- Retry 3: 15 minutes after retry 2
After all retries are exhausted, the webhook is marked as failed and a notification is sent to the account email. Webhooks that fail consistently for 24 hours are automatically disabled.
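The schedule above can be expressed as cumulative delays from the initial failure. This is a sketch of the documented policy, not NexusForge's actual scheduler:

```typescript
// Delay (in seconds) before each retry, measured from the previous
// attempt: 30 s, then 2 min, then 15 min, per the retry policy above.
const RETRY_DELAYS_S = [30, 120, 900] as const;

// Seconds after the initial failure at which retry `n` (1-based) fires.
function retryAtSeconds(n: 1 | 2 | 3): number {
  return RETRY_DELAYS_S.slice(0, n).reduce((sum, d) => sum + d, 0);
}
```

So the final retry lands 1,050 seconds — 17.5 minutes — after the initial failure, after which the delivery is marked as failed.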
TypeScript SDK
The official TypeScript SDK provides a type-safe client for the NexusForge API. It supports Node.js 18+ and modern browser environments.
Installation
npm install @nexusforge/sdk
Initialize the Client
import { NexusForge } from "@nexusforge/sdk";
const nxf = new NexusForge({
apiKey: process.env.NXF_API_KEY!,
network: "mainnet", // or "testnet"
});
Create & Deploy an Agent
const agent = await nxf.agents.create({
name: "price-oracle-agent",
runtime: "deterministic-v2",
chains: ["ethereum", "base"],
trigger: { type: "interval", every: "30s" },
execution: { enclave: "sgx", maxGas: 500_000 },
verification: { proofSystem: "plonky3", postTo: "ethereum" },
});
console.log(`Agent deployed: ${agent.id}`);
Listen for Events
const stream = nxf.agents.events(agent.id);
stream.on("proof.verified", (event) => {
console.log(`Proof verified at block ${event.block}`);
console.log(`Tx: ${event.txHash}`);
});
stream.on("agent.error", (event) => {
console.error(`Agent error: ${event.message}`);
});
// Clean up when done
stream.close();
Verify a Proof
const result = await nxf.proofs.verify("prf_a1b2c3");
if (result.valid) {
console.log(`Proof is valid (${result.verificationTimeMs}ms)`);
} else {
console.error("Proof verification failed");
}
Full Example: End-to-End Agent Deployment
import { NexusForge } from "@nexusforge/sdk";
async function main() {
const nxf = new NexusForge({
apiKey: process.env.NXF_API_KEY!,
network: "mainnet",
});
// Deploy the agent
const agent = await nxf.agents.create({
name: "defi-rebalancer",
runtime: "deterministic-v2",
chains: ["ethereum", "base", "arbitrum"],
trigger: { type: "interval", every: "60s" },
execution: { enclave: "sgx", maxGas: 750_000 },
verification: { proofSystem: "plonky3", postTo: "ethereum" },
});
console.log(`Agent ${agent.id} deployed to ${agent.node.id}`);
// Wait for first execution and verify
const proofs = await nxf.agents.proofs(agent.id, { limit: 1 });
const proof = proofs.data[0];
const verification = await nxf.proofs.verify(proof.proofId);
console.log(`First proof ${proof.proofId}: valid=${verification.valid}`);
// Check node metrics
const metrics = await nxf.nodes.metrics(agent.node.id);
console.log(`Node uptime: ${metrics.uptimePct}%`);
console.log(`Proof latency: ${metrics.avgProofLatencyMs}ms`);
// Set up monitoring
const stream = nxf.agents.events(agent.id);
stream.on("proof.verified", (e) => console.log(`Block ${e.block} verified`));
stream.on("agent.error", (e) => console.error(`Error: ${e.message}`));
}
main().catch(console.error);
Python SDK
The NexusForge Python SDK provides both synchronous and asynchronous clients for the NexusForge API. Requires Python 3.9+.
Installation
pip install nexusforge
Initialize the Client
from nexusforge import NexusForge
nxf = NexusForge(
api_key="nxf_sk_live_abc123def456",
network="mainnet",
)
Create & Deploy an Agent
agent = nxf.agents.create(
name="price-oracle-agent",
runtime="deterministic-v2",
chains=["ethereum", "base"],
trigger={"type": "interval", "every": "30s"},
execution={"enclave": "sgx", "max_gas": 500_000},
verification={"proof_system": "plonky3", "post_to": "ethereum"},
)
print(f"Agent deployed: {agent.id}")
print(f"Status: {agent.status}")
Check Agent Status
agent = nxf.agents.get("agt_7x9k2m4n")
print(f"Status: {agent.status}")
print(f"Uptime: {agent.uptime}")
print(f"Executions: {agent.executions}")
# List recent proofs
proofs = nxf.agents.proofs("agt_7x9k2m4n", limit=5)
for proof in proofs.data:
print(f" {proof.proof_id}: block={proof.block}, verified={proof.verified}")
Async Support
For high-throughput applications, the SDK provides an async client built on asyncio and httpx:
import asyncio
from nexusforge import AsyncNexusForge
async def main():
nxf = AsyncNexusForge(
api_key="nxf_sk_live_abc123def456",
network="mainnet",
)
# Deploy agent
agent = await nxf.agents.create(
name="async-oracle",
runtime="deterministic-v2",
chains=["ethereum"],
trigger={"type": "interval", "every": "15s"},
execution={"enclave": "sgx", "max_gas": 300_000},
verification={"proof_system": "plonky3", "post_to": "ethereum"},
)
# Concurrently fetch status and proofs
status, proofs = await asyncio.gather(
nxf.agents.get(agent.id),
nxf.agents.proofs(agent.id, limit=10),
)
print(f"Agent {status.id}: {status.status}")
print(f"Proofs generated: {len(proofs.data)}")
await nxf.close()
asyncio.run(main())
Full Example
from nexusforge import NexusForge
def main():
nxf = NexusForge(api_key="nxf_sk_live_abc123def456", network="mainnet")
# Deploy
agent = nxf.agents.create(
name="defi-monitor",
runtime="deterministic-v2",
chains=["ethereum", "base", "arbitrum"],
trigger={"type": "interval", "every": "60s"},
execution={"enclave": "sgx", "max_gas": 750_000},
verification={"proof_system": "plonky3", "post_to": "ethereum"},
)
print(f"Deployed {agent.id} on node {agent.node.id}")
# Verify latest proof
proofs = nxf.agents.proofs(agent.id, limit=1)
if proofs.data:
result = nxf.proofs.verify(proofs.data[0].proof_id)
print(f"Proof valid: {result.valid} ({result.verification_time_ms}ms)")
# Check node health
metrics = nxf.nodes.metrics(agent.node.id)
print(f"Node uptime: {metrics.uptime_pct}%")
print(f"Avg proof latency: {metrics.avg_proof_latency_ms}ms")
if __name__ == "__main__":
main()
Rust SDK
The NexusForge Rust SDK provides a high-performance, strongly-typed client for the NexusForge API. Built on tokio and reqwest for async I/O.
Note: The Rust SDK is currently in beta. API surface may change between minor versions. Please report issues on GitHub.
Installation
Add the following to your Cargo.toml:
[dependencies]
nexusforge = "0.4"
tokio = { version = "1", features = ["full"] }
Initialize the Client
use nexusforge::NexusForge;
#[tokio::main]
async fn main() -> Result<(), nexusforge::Error> {
let nxf = NexusForge::builder()
.api_key("nxf_sk_live_abc123def456")
.network(nexusforge::Network::Mainnet)
.build()?;
Ok(())
}
Create & Deploy an Agent
use nexusforge::{NexusForge, AgentConfig, Trigger, Execution, Verification};
#[tokio::main]
async fn main() -> Result<(), nexusforge::Error> {
let nxf = NexusForge::builder()
.api_key("nxf_sk_live_abc123def456")
.network(nexusforge::Network::Mainnet)
.build()?;
let config = AgentConfig::builder()
.name("price-oracle-agent")
.runtime("deterministic-v2")
.chains(vec!["ethereum", "base"])
.trigger(Trigger::Interval { every: "30s".into() })
.execution(Execution {
enclave: "sgx".into(),
max_gas: 500_000,
})
.verification(Verification {
proof_system: "plonky3".into(),
post_to: "ethereum".into(),
})
.build()?;
let agent = nxf.agents().create(config).await?;
println!("Agent deployed: {}", agent.id);
// Verify the first proof
let proofs = nxf.agents().proofs(&agent.id, Some(1)).await?;
if let Some(proof) = proofs.data.first() {
let result = nxf.proofs().verify(&proof.proof_id).await?;
println!("Proof valid: {}", result.valid);
}
Ok(())
}
Audit Reports
NexusForge undergoes regular third-party security audits. All audit reports are published in full to maintain transparency with the community. Below is a summary of completed audits.
Trail of Bits — December 2025
| Detail | Value |
|---|---|
| Auditor | Trail of Bits |
| Date | December 2025 |
| Scope | Core protocol contracts, execution layer, proof verification |
| Critical | 0 |
| High | 0 |
| Medium | 2 |
| Low | 5 |
| Status | All findings remediated |
Zellic — October 2025
| Detail | Value |
|---|---|
| Auditor | Zellic |
| Date | October 2025 |
| Scope | ZK verification contracts (Plonky3 verifier, aggregation logic) |
| Critical | 0 |
| High | 0 |
| Medium | 1 |
| Low | 3 |
| Status | All findings remediated |
OpenZeppelin — August 2025
| Detail | Value |
|---|---|
| Auditor | OpenZeppelin |
| Date | August 2025 |
| Scope | NXF token contracts, governance module, staking/delegation |
| Critical | 0 |
| High | 0 |
| Medium | 0 |
| Low | 4 |
| Status | All findings remediated |
Bug Bounty
NexusForge maintains an ongoing bug bounty program to incentivize the responsible disclosure of security vulnerabilities. We welcome reports from the security research community and pay competitive bounties for verified findings.
Reward Tiers
| Severity | Reward | Examples |
|---|---|---|
| Critical | Up to $50,000 | Loss of user funds, proof forgery, TEE escape, unauthorized minting |
| High | Up to $25,000 | Privilege escalation, bypassing verification, consensus manipulation |
| Medium | Up to $10,000 | Information disclosure, denial of service on critical paths, staking logic errors |
| Low | Up to $500 | Non-exploitable edge cases, informational findings, documentation discrepancies |
In Scope
- Smart contracts — All deployed protocol contracts (verifier, staking, governance, token)
- Protocol logic — Proof generation, verification, aggregation, cross-chain messaging
- Node software — nxf-node daemon, TEE attestation, enclave runtime
- API — Authentication, authorization, rate limiting, input validation
Out of Scope
- Frontend applications and marketing websites
- Social engineering, phishing, or physical attacks
- Denial-of-service (DDoS/volumetric) attacks
- Issues in third-party dependencies without a demonstrated exploit path
- Findings already reported or known
Responsible Disclosure Process
- Report: Email your finding to security@nexusforge.io with a detailed description, reproduction steps, and potential impact assessment.
- Acknowledgment: We will acknowledge receipt within 24 hours.
- Triage: Our security team will assess severity and confirm the finding within 72 hours.
- Remediation: We coordinate a fix and agree on a disclosure timeline (typically 90 days).
- Payout: Bounty is paid within 14 days of confirmed remediation, in USDC or NXF (your choice).
Contact
Email: security@nexusforge.io
PGP Key Fingerprint: 4A7B 8C9D 0E1F 2A3B 4C5D 6E7F 8A9B 0C1D 2E3F 4A5B
Public key available at nexusforge.io/.well-known/security-pgp.asc
Threat Model
NexusForge's security architecture is designed around a defense-in-depth approach. This section documents the primary threat categories, their assessed likelihood, and the mitigations the protocol employs to address each one.
Threat Categories
| Threat | Likelihood | Description | Mitigation |
|---|---|---|---|
| Operator Collusion | Medium | Multiple node operators coordinate to produce fraudulent execution results or censor specific agents. | Every execution is independently verified via zk-proofs posted on-chain. Collusion does not help because proofs are mathematically verified by the verifier contract, not by other operators. Random node assignment prevents targeted collusion. |
| TEE Bypass | Low | An attacker exploits a vulnerability in Intel SGX to extract secrets or tamper with enclave execution. | TEE serves as a first layer of defense but is not solely relied upon. All execution results are verified by zk-proofs. Attestation quotes are refreshed every epoch and checked against Intel's TCB recovery database. Nodes with outdated TCB are automatically suspended. |
| Proof Forgery | Very Low | An attacker constructs a valid-looking zk-proof for an execution that never occurred or produced different results. | Plonky3 proofs are computationally sound under standard cryptographic assumptions (collision-resistant hash functions). The on-chain verifier contract checks proof validity against the circuit's verification key, which is immutable once deployed. Forging a proof is equivalent to breaking the underlying hash function. |
| Sybil Attacks | Medium | An adversary creates many node identities to gain disproportionate influence over agent scheduling or reward distribution. | The per-node minimum stake requirement (see Staking) creates an economic barrier. Stake-weighted selection ensures influence is proportional to capital at risk. Quadratic staking penalties discourage splitting stake across many nodes controlled by the same entity. |
| Cross-Chain Replay | Low | An attacker replays a valid cross-chain message on a different chain or at a different time to trigger duplicate actions. | All cross-chain messages include a chain-specific nonce, destination chain ID, and expiration timestamp. The messaging layer maintains a nonce registry on each destination chain to reject replayed messages. Messages expire after 2 epochs (12 hours). |
Defense-in-Depth Approach
NexusForge does not rely on any single security mechanism. The protocol layers multiple independent defenses so that a breach in one layer does not compromise the system:
- Layer 1 — Economic Security: Operators stake NXF tokens that are slashed for misbehavior, creating a direct financial incentive for honest execution.
- Layer 2 — Hardware Isolation: Agent code runs inside attested TEEs (Intel SGX), preventing the operator from observing or tampering with execution at the hardware level.
- Layer 3 — Cryptographic Verification: Every execution produces a zk-proof (Plonky3) that is verified by an immutable on-chain contract. This provides mathematical certainty of correct execution regardless of TEE integrity.
- Layer 4 — Deterministic Replay: Execution environments are fully deterministic, allowing any party to replay an agent's logic and independently verify the result matches the posted proof.
- Layer 5 — Governance & Social Layer: Protocol upgrades go through a timelocked governance process with a 7-day voting period and 48-hour execution delay, allowing the community to review and veto malicious proposals.
Incident Response
In the event of a security incident, NexusForge follows a structured response process:
- Detection: Automated monitoring alerts on anomalous proof failures, unexpected slashing events, or attestation irregularities.
- Containment: The protocol includes a guardian multisig (4-of-7) that can pause specific contracts within minutes if a critical vulnerability is actively exploited.
- Investigation: The security team analyzes on-chain data, node logs, and proof records to determine root cause and blast radius.
- Remediation: A fix is developed, audited by an independent reviewer, and deployed through an emergency governance process (24-hour expedited timelock).
- Post-Mortem: A detailed post-mortem is published within 7 days, including timeline, root cause analysis, impact assessment, and preventive measures.