C++, Rust, and High-Frequency Trading: Where Deterministic Latency Decides the Argument
Introduction
Programming-language debates are usually tolerated because most systems can afford a little theater. A service is a bit inefficient, a queue gets wider than it should, a retry policy does something morally questionable, and everyone keeps moving because the product still works, the revenue still lands, and the latency chart is ugly in a survivable way. Teams can spend weeks arguing about purity because the system itself is polite enough not to slap them immediately.
High-frequency trading is less sentimental. It does not care which language won the internet this quarter, which conference talk had the cleanest slides, or which rewrite initiative made people feel like the future had finally arrived. It cares whether market data becomes state, state becomes a decision, and the decision becomes an order before the window closes. In that kind of environment, elegant opinions that cannot survive measurement get mugged quickly and usually without warning.
That is why the question of C++ and Rust in HFT is interesting. Not because one language is holy and the other is fraudulent, but because HFT is one of the rare domains that forces the whole argument to cash out in actual system behavior. The hot path either keeps its shape under pressure or it does not. Tail latency either remains disciplined or it does not. Replay either tells the truth or it does not. Architecture is not a personality test there. It is an invoice with a clock attached.
And HFT is unusually honest about where the bill comes from. It is not only the matching or execution window itself. The costs accumulate in feed parsing, book maintenance, serialization, cross-core communication, jitter under load, and all the tiny “probably fine” decisions that become public humiliation once the system is subjected to real venue traffic. The market has a cruel gift for converting vague engineering language into exact losses.
This is also why the answer is not "C++ forever" or "rewrite everything in Rust because safety is good and fear is a business model." The more honest answer is narrower and therefore more useful. C++ still dominates the hottest HFT paths because the surrounding world of tooling, feed handling, memory control, profiling, and hardware-adjacent practice remains extremely C++ shaped. Rust is genuinely useful around that core, and sometimes inside carefully chosen parts of it, but it does not erase the basic fact that low-latency trading punishes abstraction mistakes faster than most teams can rename the initiative.
So the right conversation is not about identity. It is about system boundaries. Which parts of the stack need brutal control over memory, layout, queues, affinity, and wire behavior? Which parts benefit most from stronger correctness constraints and safer defaults? Which parts deserve hybrid treatment instead of tribal purity? Those questions are far less glamorous than language sermons, but they are also the questions that survive contact with production and the questions that let teams cooperate around evidence instead of slogans.
Why HFT Makes Bad Technical Philosophy Look Expensive
HFT is unusually good at exposing a familiar engineering lie: the lie that average behavior is enough. In many ordinary products, a system can remain respectable while hiding occasional chaos behind throughput, retries, or user patience. In HFT, average latency is interesting, but tail behavior is often the part that actually humiliates you. A system that looks fast until it twitches at the wrong time is not a fast system in any commercially meaningful sense. It is a confidence trick with a benchmark attached.
That is why HFT engineers become allergic to imprecise abstractions. They learn that one extra allocation on the hot path is not "just one allocation." It is a possible source of jitter. One queue hop is not "just one queue hop." It is another place where time gets stored, coordination expands, and visibility becomes worse. One cache-hostile structure is not just an aesthetic blemish. It is an ongoing tax on every market event that passes through the system. Multiply that by real feed volume and suddenly a design choice from a slide deck becomes a recurring line item in the budget for disappointment.
The cruelty of the domain is that it also punishes partial explanations. A team may identify one obvious latency source and still miss the real offender because the chain is cumulative. A memory layout decision widens the cache miss profile. That widens the queueing window. That changes how the system behaves under burst traffic. That changes the order in which "rare" branch behavior shows up. Then someone walks into the postmortem and says the problem was "network noise," which is engineering code for "we have not finished telling the truth yet."
Rust enters this conversation with legitimate force because memory safety matters, concurrency correctness matters, and systems code deserves better defaults than "please be careful while juggling knives over a pit." That part is true. But HFT does not reward truth in isolation. It rewards combined truth. Safety matters, yes. So do mature feed handlers, stable ABI boundaries, replay tooling, profile-driven iteration, mature exchange integration culture, and the ability to inspect exactly what the machine is doing when the market is unkind. C++ still arrives with more of that surrounding infrastructure in most HFT environments.
This is one reason buyers and engineering leaders should resist purity narratives. A language may be excellent in a narrow dimension and still be the wrong default for the most timing-sensitive part of a stack if the surrounding ecosystem, tooling, and team experience do not support the actual delivery path. HFT is where lovely local truths go to learn that the whole path still matters more. The stack does not care that one claim was morally elegant if the resulting system is operationally incoherent.
It is also where team ecology matters more than people admit. A cooperative team with a disciplined replay harness, a shared latency vocabulary, and boringly good profiling habits will usually outperform a more fashionable team that keeps confusing taste for evidence. HFT rewards strong technical culture more reliably than it rewards fashionable migration energy.
The Stack Is Not One Thing, So the Language Choice Should Not Pretend Otherwise
One of the dumbest mistakes in serious systems work is speaking about "the HFT stack" as though it were a single technical organism with one preferred language. It is not. It is a collection of paths with very different pressures and failure costs.
The market-data ingest path has one temperament. The order-book update path has another. Strategy logic may be numerically dense but structurally narrow. Risk checks are often latency-sensitive but also correctness-sensitive in a boring, adult, legally consequential way. Simulation and replay infrastructure may prize determinism and introspection over raw nanosecond vanity. Control-plane tooling, deployment helpers, and operator surfaces care about reliability, maintainability, and integration hygiene far more than they care about shaving five microseconds from a path no customer will ever see.
There are also human-operational differences between those layers. Some paths are modified daily by a wider team. Some are touched rarely and only under supervision. Some components need aggressive traceability because compliance or audit will eventually ask hard questions. Some need only tight bounded performance and excellent replay. Treating those as one decision is how organizations end up either over-modernizing calm components or under-governing dangerous ones.
This matters because it is often where a sensible C++ and Rust conversation begins. C++ remains strongest when the path is brutally hot, hardware-conscious, integration-heavy, and already surrounded by years of native operational practice. Rust becomes more attractive when the path is still important but the economic value of stronger defaults, clearer ownership, and narrower memory-risk exposure outweighs the cost of ecosystem friction.
In practice, that often leads to hybrid outcomes. The hottest feed-handling and gateway paths stay in C++. Replay tools, config validation, certain risk-side helpers, message-normalization utilities, audit tools, or internal operator-facing components may be excellent Rust candidates. This is not indecision. It is architectural adulthood. The system is being treated as a set of real boundaries rather than as a language fandom with a datacenter.
And this is where many rewrite proposals finally get honest. Once a team maps the stack path by path, the fantasy of a single universal answer tends to collapse. That collapse is healthy. It gives the organization permission to optimize for evidence, maintainability, operational trust, and delivery rhythm instead of for the emotional comfort of having a simple slogan.
Where C++ Still Owns the Hottest Paths
C++ keeps its place in HFT for reasons that are less mystical than outsiders sometimes imagine. The first reason is memory and layout control. HFT hot paths care about which data lives together, how structures behave in cache, how ownership shows up under load, and whether the system can remain allocation-disciplined when the market stops being polite. C++ still gives engineers unusually direct leverage over those choices, and it does so inside an ecosystem that has already spent decades learning which "small" costs are secretly large.
The second reason is tooling density. C++ in HFT does not mean only a language. It means compilers, sanitizers, flame graphs, perf, VTune, replay harnesses, exchange adapters, queueing folklore, allocator expertise, and a vast body of performance war stories accumulated under financial pressure. Teams do not start from zero there. They inherit a deep operational culture, and that culture matters because HFT rewards measured iteration far more than rhetorical cleanliness.
The third reason is integration gravity. Exchanges, native network paths, packet capture tooling, kernel-adjacent optimization, FPGA-adjacent infrastructure, and the whole low-latency ecosystem are still very comfortable in a C and C++ world. Rust can interact with that world, and sometimes very effectively, but "can interact with" is not the same thing as "is the path of least friction through the whole system." In serious HFT, friction is not an emotional inconvenience. It is a possible latency tax, a debugging tax, and a delivery tax at the same time.
There is also a subtler reason that matters more in the AI era: C++ simply has more operational memory available around this work. AI coding systems, code search, public examples, vendor snippets, optimizer folklore, and debugging trails are denser around C++ in low-latency systems than around Rust. That does not make C++ nobler. It makes it easier for humans and AI tools to collaborate inside ugly real codebases whose charm expired years ago.
Another advantage is that many HFT teams have already turned C++ into institutional muscle memory. They know how to profile it under venue pressure. They know how to strip allocations out of suspicious paths. They know what "fast enough in the microbenchmark" sounds like when it is about to become false in production. That lived knowledge is not romantic, but it is operationally precious. A team that already knows how to keep C++ honest should not throw that away lightly just because a newer language feels cleaner in isolation.
This is why C++ remains strongest not merely as syntax, but as a surrounding craft system. Once you add internal libraries, test harnesses, capture tooling, thread-affinity habits, release muscle, and diagnosis workflows, you are no longer comparing one language to another in a vacuum. You are comparing one whole delivery ecosystem to another. In HFT, ecosystems often beat ideals.
Where Rust Actually Helps Instead of Performing Morality
Rust helps most when it is solving a real problem rather than acting as a personality accessory for architecture diagrams. In HFT, the strongest Rust use cases often appear around the hot core rather than in the absolute center of it.
Rust is useful for components where correctness failures are expensive but the latency budget is not being measured with a microscope. Message validation layers, config and deployment tooling, certain protocol-normalization paths, control services, administrative utilities, offline analyzers, and internal operator tools can benefit from the language’s bias toward explicitness. The point there is not to look modern. The point is to reduce the class of dumb, repetitive, structurally avoidable mistakes that drain attention from more important work.
Rust can also help in carefully chosen near-hot components when the team has the right expertise and the boundary is honest. A low-latency parser, a bounded state machine, or a piece of deterministic infrastructure may be a solid Rust candidate if the team can keep the FFI and allocation story under control and if the surrounding ecosystem burden is understood in advance rather than discovered at 2:40 in the morning during a rollout nobody wanted.
It is also often helpful in the work around the work. Capture processors, offline book replayers, auditing helpers, deployment validation, or strategy-adjacent infrastructure benefit from stricter ownership and clearer interfaces even when they are not the absolute nanosecond battleground. Those pieces matter because they determine how fast the team can diagnose incidents, reproduce anomalies, and move safely from suspicion to verified understanding.
But this is exactly where teams need discipline. Rust is not valuable when it is dropped into the middle of a native trading stack as a faith-based renovation. It is valuable when the boundary is clean, the measurement path is obvious, and the operational cost of the integration is lower than the safety or maintainability gain it creates. Otherwise, the project becomes a beautiful case study in how to spend serious engineering time moving uncertainty sideways.
That is the real anti-pattern to avoid: not using Rust, but using it to disguise the absence of architectural clarity. If the team cannot explain where ownership begins, where latency is measured, where buffers cross, and where recovery happens under stress, changing the language will not save the design. It will just make the failure more bilingual.
The Boundary Matters More Than the Sermon
A common mistake in C++ versus Rust discussions is assuming that using Rust automatically removes danger. It does not. It changes where the danger sits. In HFT, that boundary question is especially important because hot paths rarely end at the language line. They end at network boundaries, queue boundaries, scheduling boundaries, FFI boundaries, and data-layout boundaries.
If a Rust component must cross into a C++ exchange adapter, speak to a native queue, hand data to a strategy engine with tight layout assumptions, or maintain deterministic behavior across boundary transitions, then the real engineering work is not "we used Rust." The real work is how carefully the seam was defined and verified. Unsafe behavior can still arrive through ABI mismatch, ownership confusion, hidden copies, queueing mistakes, or timing surprises. The language alone is not your governance model. The boundary is.
This is why mature teams talk about a narrow hot path and a narrow unsafe surface. They do not rely on slogans like "memory safety by default" to solve what is fundamentally a system design problem. Good teams ask uglier and therefore more useful questions. Where does the copy happen? Where is the queue hop? Which side owns the buffer? Which path allocates? What happens during backpressure? What is replayable? What can be benchmarked in isolation, and what must be benchmarked end-to-end because local wins have a long tradition of becoming global disappointments?
The payoff of boundary clarity is not only performance. It is organizational sanity. Once seams are explicit, teams can split responsibility without losing accountability. Performance engineers know what to measure. Platform people know what not to touch casually. Security and risk teams can reason about failure surfaces. Buyers and leadership get a technical read that stops the room from arguing in circles. A good boundary does not merely help the compiler. It helps the company.
That is one reason HFT architecture benefits from a cooperative, ecosystem-minded posture rather than a hero posture. The best systems are usually not built by one person proving purity. They are built by a group agreeing on interfaces, measurements, and failure ownership so thoroughly that the system remains explainable even when the market stops being friendly.
Practical Cases Worth Solving First
The smartest first project is rarely "rewrite the hot path." That is the technical equivalent of entering a house and deciding the first useful act is to replace the entire skeleton before checking which pipe is already flooding the kitchen.
The better first project is one of these:
Feed-handler evidence work
If the team argues about whether parsing, normalization, queueing, or handoff is really the latency problem, build the evidence path first. Capture representative traffic, replay it deterministically, and force the system to confess where time and jitter are actually entering the chain. Most HFT systems do not need more ideology here. They need a better lie detector.
That evidence path should be good enough that the same trace can be used by performance people, developers, and leadership when a hard prioritization call is needed. Once the timing story becomes shared, half the political friction vanishes because the room is no longer negotiating with rumors.
Gateway and risk boundary cleanup
Many stacks are not ruined by the core strategy logic. They are ruined by boundary sloppiness between risk, gateway logic, and operational coordination. A careful rewrite or restructuring at those seams can improve reliability and diagnosability without the commercial risk of touching the absolutely hottest loop first.
This is also where stronger language guarantees can create real economic value. If order validation, throttling, and risk messaging become easier to reason about, the whole system gets calmer. Calm systems are faster to change and cheaper to govern.
Hybrid control-plane cleanup
If operator tooling, deployment helpers, recovery utilities, or replay tools are fragile, Rust can be a strong candidate there. These components often shape the health of the whole organization even when they do not sit in the fastest microsecond path. Cleaner tools can make the hot system calmer without pretending that every binary in the estate deserves the same language.
The hidden win here is health. Better tooling reduces the amount of 2 a.m. archaeology a team must perform just to answer basic operational questions. That means fewer emergency rituals, fewer brittle one-person systems, and better long-term technical cooperation. In mature engineering organizations, that matters more than people say out loud.
Hands-On Lab: Build a tiny sequence-gap detector and make it honest
Let us keep the lab small and useful. HFT systems live and die by sequence discipline long before they reach glamorous strategy logic. This toy program replays a feed-like stream and reports where gaps appeared.
The point of the exercise is not that you will deploy this exact code. The point is to teach the habit of preserving deterministic state under pressure. Sequence discipline is one of the first places where a trading system either proves it respects evidence or proves it is still bluffing.
main.cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Packet {
    std::uint64_t seq;
    std::string payload;
};

struct Gap {
    std::uint64_t expected;
    std::uint64_t received;
};

class GapDetector {
public:
    void on_packet(const Packet& packet) {
        if (!started_) {
            // Seed the expected sequence from the first packet we see.
            expected_ = packet.seq + 1;
            started_ = true;
            return;
        }
        if (packet.seq != expected_) {
            // Record the hole: what we expected versus what actually arrived.
            gaps_.push_back({expected_, packet.seq});
        }
        expected_ = packet.seq + 1;
    }

    const std::vector<Gap>& gaps() const {
        return gaps_;
    }

private:
    bool started_ = false;
    std::uint64_t expected_ = 0;
    std::vector<Gap> gaps_;
};

int main() {
    // Deterministic replay with two deliberate gaps: 1004-1006 and 1009-1010.
    std::vector<Packet> replay{
        {1001, "AAPL bid"},
        {1002, "AAPL ask"},
        {1003, "MSFT bid"},
        {1007, "MSFT ask"},
        {1008, "NVDA bid"},
        {1011, "NVDA ask"}
    };

    GapDetector detector;
    for (const auto& packet : replay) {
        detector.on_packet(packet);
    }

    if (detector.gaps().empty()) {
        std::cout << "no gaps\n";
        return 0;
    }
    for (const auto& gap : detector.gaps()) {
        std::cout << "gap expected=" << gap.expected
                  << " received=" << gap.received << "\n";
    }
}
Build
On Linux or macOS:
g++ -O2 -std=c++20 -o gap_detector main.cpp
./gap_detector
On Windows:
cl /EHsc /O2 /std:c++20 main.cpp
.\main.exe
Expected output:
gap expected=1004 received=1007
gap expected=1009 received=1011
Why this tiny exercise matters
Because it forces the right kind of thinking:
- deterministic state update
- honest sequencing
- replay before theory
- bounded, measurable behavior
That is already more HFT than a surprising number of conference slides.
If you want to make the exercise more realistic, add timestamps, late-arriving packets, and venue-specific session resets. What matters is not making the code more theatrical. What matters is building the reflex that every claim about data flow should be testable, replayable, and small enough to explain.
Test Tasks for Enthusiasts
- Port the same detector to Rust and compare not benchmark vanity, but boundary clarity, dependency friction, and how easily each version fits your existing tooling.
- Extend the replay so that missing packets can later arrive out of order, then decide whether the detector should buffer, reject, or flag them.
- Add timing and measure the difference between a vector-backed replay and a ring-buffer-backed replay.
- Introduce one unnecessary allocation on the hot path and measure how quickly a "small" decision starts contaminating the result.
- Add a logging branch inside on_packet and watch how fast observability becomes sabotage when it is placed carelessly.
Summary
The real C++ and Rust conversation in HFT is not about which language deserves the nicer mythology. It is about which parts of the system need direct control, which parts benefit from stronger defaults, and which boundaries can be made honest enough to support hybrid design without delusion.
C++ still dominates the hottest HFT paths because the domain rewards control over memory layout, queueing, wire behavior, profiling, replay, and integration with a mature low-latency ecosystem. Rust is useful where correctness, explicitness, and maintainability create more value than additional ecosystem friction costs. Both can belong in a serious stack. The adult move is to decide where, and to let evidence rather than language fandom keep score.
Teams that get this right do something very unglamorous and very effective. They stop asking which language will save them and start asking which boundary deserves what kind of discipline. That shift sounds modest, but it is the difference between architecture as branding and architecture as operational truth. In HFT, truth usually ages better.
References
- NASDAQ TotalView-ITCH specification: https://nasdaqtrader.com/content/technicalsupport/specifications/dataproducts/NQTVITCHSpecification.pdf
- FIX Trading Community standards: https://www.fixtrading.org/standards/
- DPDK documentation: https://doc.dpdk.org/guides/
- Linux timestamping documentation: https://docs.kernel.org/networking/timestamping.html
- Brendan Gregg on Flame Graphs: https://www.brendangregg.com/flamegraphs.html
- The Rust Performance Book: https://nnethercote.github.io/perf-book/