Low-Latency Solana Playbook for HFT Traders
Powered by Dysnix/RPC Fast infra & DevOps services.
High-frequency trading (HFT) on Solana demands sub-millisecond reaction times and guaranteed transaction landing. Below is the recommended setup for traders, bots, and protocols that want the lowest possible latency.
You can optimize Solana latency by combining services (bloXroute, Jito ShredStream + low-latency send, Yellowstone/other gRPC Geyser feeds) with infra tuning and colocation – but the gains, costs, and trade-offs (centralization/MEV exposure, complexity) vary.
1. Infrastructure foundation: bare-metal + tuning by RPC Fast
Hardware: RPC Fast provisions high-efficiency bare-metal servers (AMD EPYC Turin 9005-series or EPYC Genoa 9354 CPUs, 512 GB–1.5 TB RAM).
Location: Servers are colocated close to Solana leader node constellations. The optimal regions for bloXroute SWQoS and Jito are Frankfurt, London, and New York (OVH, Latitude, Equinix, and TeraSwitch datacenters).
Tuning & Maintenance:
Performance optimization: node configuration for high throughput and low latency. Our engineers reduce base network latency and variance before layering Solana-specific optimizations.
Recommended methods for efficient event monitoring: Jito ShredStream gRPC, Yellowstone gRPC, bloXroute TX streamer, and bloXroute OFR shred parsing. Continuous benchmarking ensures the infra adapts as Solana’s leader distribution changes.
Proactive monitoring & alerts.
Sync and network health checks; as a fully managed service, RPC Fast keeps the infrastructure at peak performance at all times.
2. Market data ingestion – get shreds faster
Jito ShredStream → direct feed of leader-produced shreds. Traders see block data hundreds of ms earlier than waiting on Turbine/gossip.
bloXroute OFR / BDN → globally optimized relay delivering shreds with 30–50+ ms gains vs default propagation. However, according to our tests, Jito performs better than bloXroute OFR (as of summer 2025).
Yellowstone gRPC (Geyser plugin) → structured, filtered, and low-latency streams for accounts, slots, and transactions. Perfect for strategy logic, dashboards, and monitoring.
RPC Fast integrates the feeds the client requests, selecting the combination that benchmarks best against the client’s specific requirements.
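For a concrete sense of what a Yellowstone subscription looks like, the Python sketch below builds a filter object mirroring the general shape of the SubscribeRequest message (accounts / slots / commitment). The exact field names should be verified against your client library’s .proto definitions; the filter label is a placeholder and the program ID is just the SPL Token program used as an example.

```python
# Sketch of a Yellowstone gRPC subscription filter. Field names mirror the
# general shape of SubscribeRequest; check them against your client's .proto.
# The "hft_accounts" label is a placeholder of our choosing.

def build_subscribe_request(program_id: str) -> dict:
    """Stream only accounts owned by one program, plus slot updates,
    at 'processed' commitment for the lowest-latency view."""
    return {
        "accounts": {
            "hft_accounts": {"owner": [program_id]},
        },
        "slots": {"slot_updates": {}},
        # processed < confirmed < finalized in latency
        "commitment": "processed",
    }

req = build_subscribe_request("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA")
```

Subscribing at `processed` commitment trades a small reorg risk for the earliest possible view of state changes, which is usually the right trade for HFT signal generation.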
3. Transaction submission – land first, not last
Jito Low Latency Transaction Send (or Jito Block Engine transaction submission) → direct, priority transaction submission with bundle support for MEV-aware trading.
BloXroute Trading API → RPC Fast partnered with bloXroute to bring SWQoS benefits to our clients. The Solana Trading API provides blazing-fast transaction propagation on the Solana network (83% of transactions land first compared to public RPC), plus additional features such as MEV protection.
Colocation Peering → servers are placed in datacenters with minimal network distance to the current leaders.
Result: transactions arrive earlier at the leader and get a better shot at first-slot inclusion.
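Whichever submission path is used, direct endpoints generally accept the standard JSON-RPC sendTransaction method. A minimal sketch of a latency-tuned request body, assuming a base64-encoded signed transaction (the endpoint URL and any auth headers are deployment-specific and omitted here):

```python
import json

def build_send_tx_payload(b64_signed_tx: str) -> dict:
    """Standard JSON-RPC sendTransaction body, tuned for latency:
    skipPreflight avoids a simulation round trip, and maxRetries=0 keeps
    the retry policy in our own hands rather than the RPC node's."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "sendTransaction",
        "params": [
            b64_signed_tx,
            {
                "encoding": "base64",
                "skipPreflight": True,
                "maxRetries": 0,
            },
        ],
    }

payload = build_send_tx_payload("AAAA...")  # placeholder for a real signed tx
body = json.dumps(payload)                  # POST this to the submission endpoint
```

`skipPreflight: true` is the usual choice for HFT: the simulation round trip costs more than the occasional wasted fee on a transaction that would have failed preflight.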
4. Trade-offs to keep in mind
Centralization & MEV: Relying on Jito/bloXroute means aligning with their relay ecosystems.
Cost: Bare-metal, colocation, and premium feeds are not cheap – but for HFT, speed is profit.
Complexity: RPC Fast abstracts this away with full DevOps + infrastructure management.
Recommended Setup (Step-by-Step)
Provision bare-metal high-performance server in leader-adjacent DCs (handled by RPC Fast).
Enable Jito ShredStream (or bloXroute OFR) for fastest data (integrated by RPC Fast during the node setup by default).
Use Yellowstone gRPC for structured feeds (integrated by RPC Fast during the node setup by default).
Route transactions via Jito Block Engine or bloXroute Trader API.
RPC Fast engineers will assist in continuous benchmarking & performance tuning via dedicated communication channels.
Scale horizontally with multi-region redundancy, if needed.
With RPC Fast handling infra + optimization, HFT teams can focus entirely on strategy and execution, knowing their Solana connection is as fast, stable, and tuned as physics allows.
Parallel Submission
Broadcasting a single signed transaction to multiple RPC/relay endpoints in parallel increases the probability that (a) at least one path delivers it quickly to the current leader and (b) it avoids single-path congestion or transient RPC failures – as long as you handle blockhash expiry, deduplication, and monitoring correctly.
RPC Fast provides several options for sending transactions (RPC dedicated nodes, bloXroute Trading API, Jito Block Engine transaction submission) and we recommend using all of them at the same time.
Why parallel submission helps
Multiple network paths → lower tail latency. Different RPC providers and relays have different peering, geographic placement, and routing to validators/leaders. Sending to several reduces p95/p99 propagation time to the leader (race-to-leader effect). Because a signed transaction carries a unique signature, only one copy is processed on-chain; the duplicates are deduplicated at no extra on-chain cost. Still, every submission counts against your RPC rate limits, successful or not.
Different relayer ecosystems catch different leaders. Some relays (Jito, bloXroute OFR, private relayers) have direct/private routes to leaders or trader-focused routing; sending to them in parallel exploits whichever has the fastest path to the current leader.
Race improves inclusion odds vs. queueing/MEV competition. If there’s contention for a slot, the node that reaches the leader first (or the relay that hands higher-priority tx to a block-builder) has the advantage. Broadcasting broadly turns latency into a competitive edge.
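The race-to-leader idea above can be sketched as a simple asyncio fan-out: submit the same signed transaction through every sender, take the first clean acceptance, and cancel the rest (the network deduplicates by signature, so cancellation is only local hygiene). The sender coroutines here are simulated stand-ins – in practice they would wrap HTTP/gRPC calls to Jito, bloXroute, and dedicated RPC nodes:

```python
import asyncio

async def fan_out(signed_tx, senders):
    """senders: {name: async fn(tx) -> signature}. Returns (winner, sig)
    from the first sender that responds without error; failed sends are
    skipped, and still-pending sends are cancelled."""
    tasks = {asyncio.create_task(fn(signed_tx)): name
             for name, fn in senders.items()}
    pending = set(tasks)
    try:
        while pending:
            done, pending = await asyncio.wait(
                pending, return_when=asyncio.FIRST_COMPLETED)
            for task in done:
                if task.exception() is None:  # first clean acceptance wins
                    return tasks[task], task.result()
        raise RuntimeError("all endpoints rejected the transaction")
    finally:
        for task in pending:
            task.cancel()

# Simulated senders standing in for real Jito / bloXroute / RPC calls:
async def via_jito(tx):
    await asyncio.sleep(0.01)   # fastest path in this simulation
    return "sig-jito"

async def via_public_rpc(tx):
    await asyncio.sleep(0.05)
    return "sig-rpc"

winner, sig = asyncio.run(
    fan_out(b"signed-tx", {"jito": via_jito, "rpc": via_public_rpc}))
# winner == "jito": the lowest-latency path wins the race
```

Because failed sends are skipped rather than fatal, a transient outage on one endpoint degrades latency instead of dropping the transaction – which is exactly the redundancy argument made above.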
Quick checklist to try this today
Pick 3–5 endpoints (include at least one leader-aware relay: Jito or bloXroute).
Implement parallel fan-out of the same signed tx to all endpoints.
Listen for sendTransaction acceptance plus getSignatureStatuses; stop after the first acceptance and track finalization.
Repeat under different network conditions and collect p50/p95/p99 latencies.
Add a durable nonce if you require multi-minute retry windows.
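The “listen, stop after first accepted, handle blockhash expiry” steps can be sketched as a polling loop. `get_status` and `get_block_height` are hypothetical callables – in production they would wrap getSignatureStatuses (reading its confirmationStatus field) and getBlockHeight RPC calls, and `last_valid_block_height` would come from the blockhash used to sign:

```python
import time

def await_confirmation(get_status, get_block_height,
                       last_valid_block_height, poll_interval=0.4):
    """Poll until the signature confirms or its blockhash expires.
    get_status returns None / "processed" / "confirmed" / "finalized";
    get_block_height returns the current network block height."""
    while True:
        status = get_status()
        if status in ("confirmed", "finalized"):
            return status
        if get_block_height() > last_valid_block_height:
            # Blockhash expired: this tx can never land now. Re-sign with a
            # fresh blockhash and re-broadcast, or use a durable nonce to
            # avoid the expiry window entirely.
            return "expired"
        time.sleep(poll_interval)

# Simulated run: status progresses to "confirmed" well before expiry.
_statuses = iter([None, "processed", "confirmed"])
_heights = iter([100, 101, 102])
result = await_confirmation(lambda: next(_statuses), lambda: next(_heights),
                            last_valid_block_height=250, poll_interval=0)
# result == "confirmed"
```

The expiry branch is the part teams most often skip: without it, a bot can keep re-broadcasting a transaction that can no longer be included, wasting rate-limit budget on every path.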
Risks & trade-offs
Cost & rate limits: Broadcasting to many paid endpoints increases usage & cost and may hit RPC rate limits. Budget accordingly.
Increased attack surface / centralization: Over-reliance on a single relay provider undoes the redundancy goal; diversify relays. Also be explicit about MEV exposure – some relayers may reorder transactions or apply MEV-enabled behavior.
False positives on “accepted”: Some RPCs report a transaction as accepted locally but may drop it before it reaches the leader – always follow up with network-level status checks.