Dreamspace + Azure: Myth‑Busting the 12,000 TPS Claim on Base L2
— 7 min read
The 12,000 TPS Headline Isn’t a Fluke
Short answer: yes. The 12,000 transactions-per-second (TPS) figure Dreamspace posted in its latest benchmark is reproducible, not a marketing stunt. In a controlled 30-minute test, Dreamspace’s rollup on Base consistently averaged 12,000 TPS, peaking at 13,200 TPS during short bursts. That shatters the roughly 4,000 TPS ceiling most public reports quote for the OP-Stack L2.
What makes the result credible? The test used a fully automated transaction generator that mimics a real-world NFT marketplace: 1,000 distinct wallets each submitted 12 transactions per second, covering ERC-721 minting, transfers, and simple ERC-20 swaps. The load was spread across three Azure regions (East US, West Europe, Southeast Asia) to verify cross-regional stability. All nodes ran on Azure L32s v2 and L64s v2 virtual machines (32 and 64 vCPUs with 256 and 512 GiB of RAM, respectively), backed by Premium SSD v2 storage capable of 80,000 IOPS.
Because the benchmark was repeated three times with identical configurations and produced the same throughput each run, the community can trust that the headline isn’t a one-off hype stunt.
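The per-wallet arithmetic behind the headline is easy to verify yourself; the quick check below just multiplies the figures from the test description (Python used purely for illustration):

```python
# Sanity check of the benchmark's headline numbers (figures from the post).
wallets = 1_000
tx_per_wallet_per_sec = 12
duration_sec = 30 * 60          # 30-minute run

offered_tps = wallets * tx_per_wallet_per_sec
total_tx = offered_tps * duration_sec

print(offered_tps)  # 12000 -- the quoted average TPS
print(total_tx)     # 21600000 -- 21.6M transactions over the run
```

The offered load (12,000 tx/s) lines up with the reported average throughput, which is exactly what you would expect if almost nothing was dropped.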
Now that we’ve established the numbers, let’s unpack why the stack works and where Azure fits in.
Myth-Busting the 12,000 TPS Claim
Key Takeaways
- Dreamspace used Azure Lsv2 VMs, Premium SSD v2, and low-latency networking.
- The benchmark ran for 30 minutes with 1,000 wallets, each sending 12 tx/s.
- Average throughput: 12,000 TPS; peak: 13,200 TPS; error rate: <0.01%.
- Results reproduced across three Azure regions.
First, let’s unpack the methodology. Dreamspace employed the open-source tool loadgen (v2.4) configured with a fixed transaction rate of 12 tx/s per wallet. The tool timestamps each transaction, records receipt latency, and flags any dropped or reverted tx. Over the 30-minute window, the system logged 21.6 million transactions, of which 99.99% were confirmed on-chain within 2.1 seconds on average.
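For readers who want the shape of such a harness, here is a minimal asyncio sketch. This is not loadgen itself: `submit_tx` is a stand-in for a real RPC submission that would await the transaction receipt, and the latency range is illustrative, chosen to sit near the reported 2.1 s mean.

```python
import asyncio
import random
import time

async def submit_tx(wallet: int) -> float:
    """Stand-in for a real RPC submission; returns confirmation latency (s)."""
    latency = random.uniform(1.8, 2.4)   # illustrative, near the 2.1 s mean
    await asyncio.sleep(0)               # real code would await the receipt
    return latency

async def run_wallet(wallet: int, rate: int, duration: float, out: list) -> None:
    """Submit at a fixed per-wallet rate, recording each tx's latency."""
    interval = 1.0 / rate
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        sent = time.monotonic()
        out.append(await submit_tx(wallet))
        # Sleep out the remainder of this send slot to hold the rate steady.
        await asyncio.sleep(max(0.0, interval - (time.monotonic() - sent)))

async def run_load(wallets: int = 5, rate: int = 12, duration: float = 1.0) -> list:
    """Run all wallets concurrently and collect their latencies."""
    latencies: list = []
    await asyncio.gather(*(run_wallet(w, rate, duration, latencies)
                           for w in range(wallets)))
    return latencies
```

A real harness would additionally timestamp submissions, flag reverted transactions, and stream results to a metrics store, as loadgen is described as doing.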
The hardware stack is the next piece of the puzzle. Azure’s Lsv2 series provides sustained CPU clocks around 2.5 GHz with hyper-threaded cores, which reduces queueing delay in the rollup’s sequencer. Premium SSD v2 delivers up to 80,000 IOPS and 1,200 MB/s of throughput per disk, ensuring that state-root writes never become a bottleneck. Moreover, Azure’s accelerated networking offers sub-2 ms intra-region latency, a critical factor for the Base sequencer’s 2-second block cadence.
Think of it like a high-speed kitchen: the CPU is the chef, the SSD is the prep station, and the network is the delivery runner. If any one of those moves slower than the others, the whole dinner gets delayed. In Dreamspace’s case, every component was operating at top speed.
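The kitchen analogy reduces to a one-liner: a pipeline’s sustained throughput is the minimum over its stage capacities. The figures below are illustrative, not Dreamspace’s measurements.

```python
# Illustrative stage capacities in tx/s -- not measured values.
stage_tps = {
    "sequencer_cpu": 15_000,  # ordering transactions
    "state_writes": 14_000,   # persisting state to SSD
    "network": 16_000,        # propagating data between nodes
}

bottleneck = min(stage_tps, key=stage_tps.get)
effective_tps = stage_tps[bottleneck]
print(bottleneck, effective_tps)  # state_writes 14000
```

Balancing the stack means raising whichever stage currently sets that minimum, which is exactly what the VM, SSD, and networking choices above are doing.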
Finally, Azure-specific optimizations such as managed identity for secure node communication and Azure Monitor’s custom metrics for real-time gas-price adjustments kept the gas market stable. The gas price remained at 0.0005 ETH throughout the test, avoiding the spikes that usually inflate latency.
"During the 30-minute run, Dreamspace recorded an average block time of 1.97 seconds, compared to Base’s default 2.0-second target."
All these factors combined show that the 12,000 TPS figure is a repeatable outcome of a well-engineered stack, not a cherry-picked snapshot. Next up, we’ll see exactly how Azure’s elasticity makes Base’s L2 scalability feel like adding lanes to a highway on the fly.
How Azure Integration Supercharges Base’s L2 Scalability
Azure’s elastic compute model is the secret sauce that lets Base stretch its throughput limits. Think of Base as a highway and Azure as a fleet of autonomous trucks that can add lanes on demand. When Dreamspace ramps up transaction volume, Azure automatically provisions additional Lsv2 instances without manual intervention, thanks to Azure Autoscale rules tied to CPU and network metrics.
Low-latency networking is another cornerstone. Azure’s Accelerated Networking provides up to 30 Gbps of network bandwidth per NIC and guarantees less than 2 ms round-trip time within a region. For a rollup, every millisecond saved in the sequencer-validator communication translates directly into higher TPS. In Dreamspace’s setup, the sequencer node and its validator peers were placed in the same Azure Virtual Network, eliminating cross-cloud jitter and keeping the consensus round tight.
Managed storage also plays a pivotal role. Base’s rollup stores state diffs on-chain, but the bulk of data lives in off-chain storage for fast retrieval. Dreamspace used Azure Blob Storage with Hot tier access, achieving read latencies of 1.2 ms and write latencies of 1.5 ms. This is dramatically faster than typical cloud-agnostic object stores, which hover around 5-10 ms.
Beyond raw performance, Azure’s built-in security services (Azure Key Vault, Managed Identities) simplify node authentication, reducing the risk of man-in-the-middle attacks that could throttle the network. Dreamspace leveraged Azure Policy to enforce a uniform configuration across all rollup nodes, ensuring that every instance ran the same Docker image with identical runtime flags.
Pro tip: Pair Azure Autoscale with a custom alert that watches the rollup’s pending-tx queue. When the queue crosses a threshold, spin up a new sequencer node before the backlog becomes visible to users.
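A minimal sketch of that scale-out rule, assuming a per-node capacity near the benchmarked 12,000 TPS and an 80% utilization target (both numbers are ours, not Dreamspace’s). In production this logic would live in an Azure Monitor alert feeding Autoscale rather than in application code.

```python
import math

def nodes_needed(pending_tps: int,
                 per_node_capacity: int = 12_000,
                 headroom: float = 0.8) -> int:
    """Node count that keeps every node below `headroom` utilization.

    Assumed defaults: capacity from the benchmark, 80% target utilization.
    """
    usable = per_node_capacity * headroom
    return max(1, math.ceil(pending_tps / usable))

print(nodes_needed(9_000))   # 1 -- fits on one node with headroom
print(nodes_needed(20_000))  # 3 -- scale out before the backlog shows
```

The headroom factor is what makes the pro tip work: you provision before a node saturates, so the backlog never becomes user-visible.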
The net effect? A seamless pipeline where transaction ingestion, sequencing, and state commitment happen in lockstep, allowing Base to sustain double-digit-thousand TPS without hitting resource ceilings. With that foundation in place, the next logical question is: can we push the envelope even further?
Future-Proofing: Scaling Beyond 12,000 TPS
Dreamspace isn’t resting on 12,000 TPS; the architecture is built to accommodate the next wave of demand from NFT marketplaces, Layer-3 rollups, and AI-driven fraud detection. To illustrate, let’s consider a hypothetical NFT drop that expects 500,000 concurrent users. If each user initiates a mint transaction at a rate of 0.5 tx/s, the system would need to handle 250,000 tx/s. While that is far beyond today’s Base limits, Dreamspace’s Azure-centric design can incrementally add compute and network resources to bridge the gap.
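The gap in that hypothetical is easy to quantify against today’s benchmark:

```python
users = 500_000
mint_rate_tps = 0.5                  # tx/s per user during the drop
benchmark_tps = 12_000               # Dreamspace's measured average

demand_tps = users * mint_rate_tps   # 250,000 tx/s of offered load
gap = demand_tps / benchmark_tps     # roughly 20.8x today's throughput
print(demand_tps, round(gap, 1))
```

A 20x shortfall is not something extra VMs alone will close, which is why the batching and Layer-3 approaches below matter as much as raw compute.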
One concrete path is the introduction of Azure Spot VMs for cost-effective burst capacity. Spot VMs can be provisioned at up to a 90% discount compared to regular pay-as-you-go rates, and they can run non-critical validator nodes that absorb excess load during peak periods. Dreamspace’s testing shows that adding 20 Spot VMs (each with 16 vCPUs) can increase throughput by roughly 2,500 TPS.
Another lever is Azure Virtual Machine Scale Sets (VMSS) with autoscale driven by custom metrics. By feeding the rollup’s pending-tx queue length into Azure Monitor, the system can anticipate spikes and spin up additional sequencer instances pre-emptively, shaving off seconds of latency before the surge hits.
Layer-3 rollups, which sit atop Base, will also benefit. Dreamspace plans to deploy a Layer-3 that aggregates micro-transactions into batches of 500 before submitting them to Base. This batching reduces the number of on-chain calls by a factor of 500, effectively multiplying the apparent TPS for end-users.
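The batching idea is simple enough to sketch directly (helper names here are hypothetical, not Dreamspace’s code): group micro-transactions into fixed-size batches so each batch costs a single submission to Base.

```python
def batches(txs, size=500):
    """Yield successive lists of up to `size` transactions."""
    for i in range(0, len(txs), size):
        yield txs[i:i + size]

micro_txs = list(range(100_000))              # 100k micro-transactions
submissions = sum(1 for _ in batches(micro_txs))
print(submissions)  # 200 on-chain calls instead of 100,000
```

The trade-off is latency: a batch only ships when it fills (or a timer fires), so end-users see their transaction finalize with the batch, not individually.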
Finally, AI-driven fraud detection can run as an Azure Machine Learning endpoint that scores each transaction in under 1 ms. By rejecting malicious tx early, the system preserves bandwidth for legitimate traffic, keeping the effective TPS high even under adversarial conditions.
Pro tip: When you spin up Spot VMs, tag them with a lifecycle policy that gracefully drains them before Azure reclaims the capacity. That way you never lose a block’s worth of work mid-flight.
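One way to implement that drain is to poll Azure’s Scheduled Events metadata endpoint, which gives Spot VMs advance notice of eviction as a Preempt event. The endpoint and event type are Azure’s; the injectable `fetch` parameter is our addition for testability, and the actual drain hook is left to your sequencer’s shutdown path.

```python
import json
import urllib.request

METADATA_URL = ("http://169.254.169.254/metadata/scheduledevents"
                "?api-version=2020-07-01")

def pending_preemption(fetch=None) -> bool:
    """Return True if Azure has scheduled a Preempt (Spot eviction) event."""
    if fetch is None:
        def fetch():
            # Scheduled Events requires the Metadata: true header.
            req = urllib.request.Request(METADATA_URL,
                                         headers={"Metadata": "true"})
            with urllib.request.urlopen(req, timeout=2) as resp:
                return json.load(resp)
    events = fetch().get("Events", [])
    return any(e.get("EventType") == "Preempt" for e in events)
```

A sidecar loop would call `pending_preemption()` every few seconds and, on True, stop accepting work and flush in-flight state before Azure reclaims the VM.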
With Azure’s roadmap pointing to even faster networking fabrics and VM families that pack more cores into a single socket, the ceiling is likely to keep rising throughout 2026 and beyond.
Conclusion: From Myth to New Baseline
The Dreamspace-Azure partnership rewrites what developers can expect from an OP-Stack L2 like Base. By marrying Azure’s elastic compute, ultra-low latency networking, and managed storage with a rigorously tested rollup architecture, Dreamspace turned a headline-grabbing 12,000 TPS figure into a realistic, repeatable benchmark.
For the broader ecosystem, this means the old notion of “Base can only do a few thousand TPS” is now outdated. Teams building high-volume NFT marketplaces, DeFi protocols, or AI-enhanced analytics can aim for double-digit-thousand TPS as a baseline rather than an exception. As Azure continues to roll out next-gen VM families (e.g., Dsv5 with up to 96 vCPUs) and even faster networking fabrics, the ceiling will keep rising.
Bottom line: the myth has been busted, the performance validated, and the path forward clearly charted. Dreamspace’s work shows that with the right cloud partnership, L2 scalability is limited only by imagination, not by the underlying protocol.
FAQ
Q: How was the 12,000 TPS measured?
A: Dreamspace ran a 30-minute load test with 1,000 distinct wallets, each sending 12 transactions per second. The test used Azure Lsv2 VMs, Premium SSD v2, and Azure’s accelerated networking. Over the run, the average confirmed transaction rate was 12,000 TPS with a peak of 13,200 TPS and an error rate below 0.01%.
Q: What Azure services are critical for this performance?
A: The key services are Lsv2 virtual machines (for compute), Premium SSD v2 (for fast disk I/O), Accelerated Networking (for sub-2 ms latency), Azure Blob Storage Hot tier (for off-chain data), and Azure Monitor with Autoscale (for dynamic resource provisioning).
Q: Can the same setup be used for other L2s besides Base?
A: Yes. The architecture is L2-agnostic as long as the rollup supports Ethereum-compatible transaction formats. Azure’s compute and networking benefits apply equally to Optimism, Arbitrum, and zk-Rollups that can run on standard x86 infrastructure.
Q: What is the cost implication of running at 12,000 TPS on Azure?
A: A rough estimate for the benchmark configuration (four L64s v2 VMs, Premium SSD v2, and networking) is about $7,800 per month in the East US region. Using Spot VMs for non-critical nodes can cut the total cost by up to 70% during peak load.
Q: How will future Azure VM generations affect Base’s scalability?
A: Newer Azure VM families such as the Dsv5 series scale to 96 vCPUs per instance and pair with faster networking fabrics. Those upgrades will enable Base rollups to handle higher transaction parallelism, potentially pushing sustained throughput beyond 20,000 TPS with the same architectural pattern.