Most post-mortems on failed Solana MEV bots don't get written. The team moves on, the strategy gets shelved, and the infrastructure decisions that caused the failure never get documented. What follows is a reconstructed case — composite, but technically accurate — of the kind of setup that teams bring to providers like rpcfast.com after months of wondering why a strategy that worked in backtesting produces nothing in production.
The bot was an arbitrage searcher targeting SOL/USDC price gaps across Raydium and Orca Whirlpool. The strategy was sound. The execution was broken in ways that took three months to fully diagnose.
The initial configuration was reasonable by most standards. A paid shared RPC endpoint from a well-known provider. WebSocket subscriptions to both pool accounts. A TypeScript bot using @solana/web3.js with Jito bundle submission. Slippage set at 50 basis points. A profit threshold of 15 basis points after fees.
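A rough sketch of what that configuration amounts to in code is below. The names and the URL are illustrative, not the team's actual values beyond the figures already mentioned:

```typescript
// Illustrative reconstruction of the original bot's configuration.
// Names and the endpoint are hypothetical; values match the figures above.
const CONFIG = {
  rpcUrl: "https://shared-provider.example.com", // paid shared RPC endpoint (placeholder)
  slippageBps: 50,          // 50 basis points = 0.50% max slippage
  minProfitBps: 15,         // 15 basis points net of fees before taking an opportunity
  jitoTipLamports: 100_000, // fixed 0.0001 SOL tip (1 SOL = 1_000_000_000 lamports)
};

// Basis-point math: a 15 bps threshold on a 100 SOL notional is 0.15 SOL.
const minProfitLamports = (notionalLamports: number): number =>
  Math.floor((notionalLamports * CONFIG.minProfitBps) / 10_000);
```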
Devnet testing showed consistent opportunity detection. The bot identified price gaps, constructed transactions, and simulated profitable routes. The team went live with confidence.
On mainnet, the bundle acceptance rate was 34%.
The data feed problem.
The bot subscribed to Raydium and Orca pool accounts via WebSocket. Under normal conditions, updates arrived in roughly 150–200ms. That sounds fast. On Solana, where a slot is 400ms, it means the bot was working with state that was already half a slot old by the time it saw it.
The more serious issue: WebSocket delivery is not uniform. Under congestion — exactly the conditions where arbitrage gaps appear — update latency spiked to 400–600ms. The bot was detecting price gaps that had already closed by the time it submitted a bundle. It was competing in a race it had already lost before the starting gun.
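One way to see this delay directly is to timestamp each WebSocket update against the node's own view of the tip. A rough diagnostic along those lines with @solana/web3.js, where the endpoint and pool address are placeholders:

```typescript
import { Connection, PublicKey } from "@solana/web3.js";

// Placeholders: substitute the real shared RPC endpoint and pool account.
const connection = new Connection("https://shared-provider.example.com", "processed");
const RAYDIUM_SOL_USDC_POOL = new PublicKey("11111111111111111111111111111111");

// Log how far behind the node's own tip each account update arrives.
connection.onAccountChange(
  RAYDIUM_SOL_USDC_POOL,
  async (_accountInfo, context) => {
    const receivedAt = Date.now();
    const tipSlot = await connection.getSlot("processed"); // node's current view of the tip
    const slotsBehind = tipSlot - context.slot;            // slot the update was produced in
    console.log(
      `update for slot ${context.slot} received at ${receivedAt}, ~${slotsBehind} slot(s) behind tip`
    );
  },
  "processed"
);
```

Note that this only measures delivery lag relative to the node's own tip; if the node itself is behind the network, the real staleness is worse, which is exactly the next problem.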
Other searchers on the same opportunities were running Yellowstone gRPC, receiving account updates directly from validator memory at sub-30ms. The WebSocket bot wasn't losing by a little. It was operating in a fundamentally different latency tier.
The slot lag problem.
The shared RPC node was running 2–3 slots behind the network tip during peak hours. This compounded with the WebSocket delay in a way the team hadn't modeled. When the bot detected an opportunity and fetched a fresh blockhash, that blockhash was sometimes 3–4 slots old by the time the bundle was submitted. Not expired — but old enough that the transaction was referencing state the current leader had already moved past.
The result was a category of failures the team initially misread as "slippage exceeded." In reality, the route had changed between state observation and bundle submission. The arbitrage gap had closed, the price had moved, and the simulation that ran on submission was working from stale data.
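Slot lag of this kind is straightforward to measure: poll the same slot query against a second, independent endpoint and compare the answers. A rough check, with both URLs as placeholders:

```typescript
import { Connection } from "@solana/web3.js";

// Placeholders: the shared RPC under test and any independent reference endpoint.
const sharedRpc = new Connection("https://shared-provider.example.com", "processed");
const referenceRpc = new Connection("https://api.mainnet-beta.solana.com", "processed");

// Sample both nodes' view of the current slot and report the gap.
async function measureSlotLag(): Promise<number> {
  const [sharedSlot, referenceSlot] = await Promise.all([
    sharedRpc.getSlot("processed"),
    referenceRpc.getSlot("processed"),
  ]);
  return referenceSlot - sharedSlot; // positive = shared node is behind
}

setInterval(async () => {
  const lag = await measureSlotLag();
  console.log(`shared RPC is ${lag} slot(s) behind the reference endpoint`);
}, 2_000);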
The Jito tip problem.
The bot used a fixed Jito tip of 0.0001 SOL, calibrated during a period of low network activity. In production, competitive searchers were bidding 50–60% of expected profit, and during high-activity windows tips were running 5–10× higher than the bot's fixed amount.
A Jito bundle with an insufficient tip doesn't fail with an error. It gets deprioritized. The bundle submits successfully from the bot's perspective, sits in the block engine queue below higher-paying bundles, and by the time it would have been included, the opportunity is gone and the bundle expires. The bot's logs showed successful submissions. The on-chain landing rate told a different story.
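The practical consequence is that submission success has to be verified on-chain, not in the client. One way to do that with @solana/web3.js is to poll the signature of the bundle's first transaction for a few slots after submission; the endpoint and timing window below are illustrative:

```typescript
import { Connection } from "@solana/web3.js";

const connection = new Connection("https://shared-provider.example.com", "confirmed");

// Returns true only if the transaction actually appears on-chain within the
// window; a "successful" bundle submission by itself proves nothing.
async function confirmLanded(
  signature: string,
  timeoutMs = 1_600 // roughly four 400ms slots
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { value } = await connection.getSignatureStatuses([signature]);
    const status = value[0];
    if (status && status.err === null) return true; // landed and succeeded
    await new Promise((resolve) => setTimeout(resolve, 200));
  }
  return false; // deprioritized or expired: count it as a loss, not a success
}
```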
The geographic problem.
The shared RPC node was in a US West Coast data center. The majority of high-stake Solana validators — and therefore most slot leaders — are in US East. Every submission added 60–80ms of cross-country latency to the delivery path. For a strategy targeting sub-slot execution windows, that margin alone was disqualifying.
The migration wasn't a single change. It was a stack replacement.
WebSocket subscriptions were replaced with Yellowstone gRPC. Account update latency dropped from 150–200ms average (with 400–600ms spikes under load) to under 30ms consistently. The bot was now seeing state changes in the same slot they happened.
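A minimal sketch of what the gRPC side of that migration can look like, assuming the commonly used @triton-one/yellowstone-grpc client; the endpoint, token, and pool address are placeholders, and exact request field names may differ between client versions:

```typescript
import Client, { CommitmentLevel } from "@triton-one/yellowstone-grpc";

// Placeholders: Yellowstone gRPC endpoint, access token, and the pool to watch.
const client = new Client("https://grpc-provider.example.com:10000", "x-token-placeholder", undefined);

async function streamPoolUpdates(poolAddress: string) {
  const stream = await client.subscribe();

  stream.on("data", (update) => {
    if (update.account) {
      // Account update pushed from the validator's Geyser plugin,
      // typically well inside the slot it was produced in.
      console.log(`account update at slot ${update.account.slot}`);
    }
  });

  // Subscribe to raw account updates for the pool at processed commitment.
  const request = {
    accounts: { pool: { account: [poolAddress], owner: [], filters: [] } },
    slots: {},
    transactions: {},
    transactionsStatus: {},
    blocks: {},
    blocksMeta: {},
    entry: {},
    accountsDataSlice: [],
    commitment: CommitmentLevel.PROCESSED,
  };
  await new Promise<void>((resolve, reject) =>
    stream.write(request, (err: Error | null | undefined) => (err ? reject(err) : resolve()))
  );
}
```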
The shared RPC was replaced with a dedicated bare-metal node in a US East data center, colocated near high-stake validators. Slot lag dropped to zero. Blockhash fetch timing became reliable.
The fixed Jito tip was replaced with dynamic calibration: a percentage of estimated profit per opportunity, with a floor and a ceiling, adjusted based on real-time bundle acceptance rate feedback. The tip calibration alone recovered roughly 40% of the opportunities that had been silently losing.
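The shape of that calibration logic is sketched below; the percentages and bounds are illustrative, not the team's actual numbers:

```typescript
// Dynamic Jito tip: a share of estimated profit, bounded by a floor and
// ceiling, and nudged up or down based on recent bundle acceptance rate.
interface TipParams {
  profitShare: number;          // e.g. 0.5 → bid 50% of expected profit
  floorLamports: number;        // never tip below this
  ceilingLamports: number;      // never tip above this
  targetAcceptanceRate: number; // e.g. 0.7
}

function calibrateTip(
  expectedProfitLamports: number,
  recentAcceptanceRate: number, // accepted / submitted over a recent window
  p: TipParams
): number {
  let tip = expectedProfitLamports * p.profitShare;

  // Losing too many auctions → bid more aggressively; winning comfortably → back off.
  if (recentAcceptanceRate < p.targetAcceptanceRate) tip *= 1.25;
  else if (recentAcceptanceRate > p.targetAcceptanceRate + 0.1) tip *= 0.9;

  return Math.floor(Math.min(Math.max(tip, p.floorLamports), p.ceilingLamports));
}
```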
Parallel submission to both Jito and bloXroute was added for leader diversity. bloXroute's leader-aware routing covered edge cases where the current slot leader had weaker Jito connectivity.
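A sketch of that parallel path is below, assuming Jito's public JSON-RPC sendBundle endpoint; the bloXroute call is left as a placeholder since its submission API isn't covered here:

```typescript
// Submit the same signed, base58-encoded bundle to both relays concurrently.
const JITO_BUNDLES_URL = "https://mainnet.block-engine.jito.wtf/api/v1/bundles";

async function sendViaJito(base58Txs: string[]): Promise<unknown> {
  const res = await fetch(JITO_BUNDLES_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "sendBundle",
      params: [base58Txs],
    }),
  });
  return res.json();
}

// Placeholder: wire this to bloXroute's own Solana submission API.
async function sendViaBloxroute(base58Txs: string[]): Promise<unknown> {
  throw new Error("not implemented in this sketch");
}

async function submitEverywhere(base58Txs: string[]) {
  // Fire both paths concurrently; a failure on one path must not block the other.
  return Promise.allSettled([sendViaJito(base58Txs), sendViaBloxroute(base58Txs)]);
}
```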
After four weeks on the new stack, bundle acceptance rate went from 34% to 81%. Landing rate — transactions that actually appeared on-chain within three slots — went from 61% to 94%.
The original bot wasn't badly built. The strategy logic was correct. The opportunity detection worked. The transaction construction was sound. None of that mattered because the infrastructure underneath it was operating in a different performance tier than the competition.
MEV on Solana is not a strategy problem. For most teams that struggle with it, it's a latency problem — specifically, the accumulated latency of every architectural decision that sits between the on-chain event and the landed transaction. Data feed delay. Slot lag. Submission path. Tip competitiveness. Geographic distance from the leader.
Each of those gaps is small in isolation. Combined, they're the difference between a 34% bundle acceptance rate and an 81% one. And unlike strategy edge — which can be competed away — infrastructure edge is durable. The searchers at the top of the stack have it. Most everyone else is catching up.
