C++, Rust, and Decentralized Crypto Exchanges: Applicability and Efficiency
Introduction
Language arguments become especially misleading in crypto because the systems themselves are so easy to misdescribe. People say "build a DEX" as if a decentralized exchange were one executable with one latency profile, one trust model, and one kind of failure. In reality a serious DEX is a layered organism. It may include on-chain logic, validator or node interactions, block-building awareness, mempool monitoring, market-data collection, state simulation, pricing, routing, risk checks, operator dashboards, and sometimes order-book or matching-adjacent services that look suspiciously like traditional exchange infrastructure wearing blockchain vocabulary.
Once we acknowledge that layered reality, the argument between C++ and Rust becomes calmer and much more useful. The right question is not which language deserves the whole architecture as a point of honor. The right question is which layers benefit from Rust's safety and ecosystem fit, which layers still reward C++'s low-level performance control, and where hybrid design stops being compromise and starts being simple good sense.
That framing matters because decentralized exchange systems live under mixed pressures. Some layers are punished hardest for correctness failures, auditability issues, and unsafe state transitions. Other layers are punished for latency, throughput, and the inability to evaluate opportunities quickly enough. Still others are operational services where the real cost is long-term maintenance and team velocity. One language can be excellent for one of those burdens and merely adequate for another. Mature architecture begins when we admit that openly.
A DEX Is a Stack, Not an Identity Statement
The first and most important correction is conceptual. A DEX is not one thing. An EVM-oriented AMM protocol, a Solana-native program ecosystem, an app-chain perpetuals exchange, and a searcher system reacting to market conditions all deserve different engineering instincts. On-chain AMM logic lives inside one set of constraints. Off-chain simulators and route evaluators live inside another. Order-book-like components or high-frequency search infrastructure may look, from a systems perspective, much closer to classic exchange software than to ordinary web application development.
This is why language debates go astray so quickly. An engineer points at Solana and correctly observes that Rust is the natural path for program development there. Another points at a latency-sensitive routing or simulation engine and correctly observes that C++ is still a brutally strong choice. Both are right in context. The problem begins when each observation is inflated into a total theory of the entire stack.
A useful mental reset is to ask, for each subsystem, what kind of pain it is punished for. If a component is wrong, is the pain primarily public correctness failure? Is it private operational cost? Is it inability to respond to rapidly changing state before the opportunity closes? Is it audit burden, hiring burden, or infrastructure burden? Different layers answer these questions differently, which is why mature DEX systems often end up linguistically mixed even when public debates crave purity.
Where Rust Rightly Takes the Lead
Rust earns its place most naturally where state transitions, safety discipline, and ecosystem fit dominate the architecture. In Rust-first blockchain environments such as Solana, that is not a marginal advantage. It is the center of gravity. The language is not merely available there; it is surrounded by frameworks, examples, security habits, and tooling that help protocol teams move in the grain of the ecosystem rather than against it. For on-chain programs, that fit matters more than abstract language comparison. The best language on paper is often the wrong language in practice if every serious operational path around it expects something else.
Rust is also attractive in greenfield services surrounding a DEX when the main enemy is not ultra-low latency but long-term correctness and maintainability. Control-plane services, coordination layers, and certain protocol-facing tools may genuinely benefit from the discipline Rust encourages. The compiler catches categories of mistakes that would otherwise demand process, vigilance, and review culture to control in C++. That is not a romantic claim. It is a practical one. Teams with strong Rust talent can reduce some classes of risk early and keep service boundaries calmer over time.
A useful counterexample keeps this grounded. Teams sometimes infer from Rust's strength in chain-native work that every surrounding off-chain subsystem should also be Rust by default. But that only follows if the surrounding systems have the same dominant pain. A hot-path simulator or search engine that repeatedly evaluates market state under tight timing pressure does not stop being a performance-sensitive native system just because it serves a crypto product. The chain may be Rust-shaped while the surrounding execution path remains very much C++-shaped.
Where C++ Still Earns Its Keep
C++ becomes difficult to replace wherever a DEX starts behaving less like an application platform and more like exchange infrastructure. Market-data ingestion, mempool listening, normalization pipelines, route evaluation, state simulation, arbitrage search, liquidation engines, and order-book-adjacent services all share a common property: they perform repeated, low-level work under pressure, and that work often sits close to memory layout, allocation strategy, parser efficiency, queue behavior, or CPU predictability.
This is where C++'s long history in systems and trading continues to matter. The language gives engineers direct control over data structures, threading models, object lifetime, custom allocators, vector-friendly layouts, and performance tooling that has been battle-tested in exactly these kinds of environments. It also benefits from an older and denser ecosystem of examples for high-performance networked systems, simulators, parsers, native gateways, and hardware-conscious code. In an era when AI assistants are being asked to help with those problems too, that density compounds the advantage.
Consider a searcher that listens to market signals, simulates paths, and decides whether an opportunity is worth chasing. The interesting cost is rarely one formula in isolation. The interesting cost is the repeated, stateful use of many formulas surrounded by ingestion, decoding, routing, and decision logic. A few avoidable copies, one badly placed lock, or an undisciplined queue can shift the economics of the whole path. C++ is not magical here, but it gives engineers a deeply familiar language for asking the machine exact questions. In systems that live and die by repetition under time pressure, that still matters.
Economics Changes the Language Answer
One reason these debates become overheated is that engineers speak as if the win condition were elegance. In DEX systems the win condition is usually economic. Latency matters because missed opportunities have a cost. Efficiency matters because repeated simulation at scale has a cost. Safety matters because incorrect state transitions have a cost. Operational simplicity matters because a system that constantly frightens its operators has a cost. Once the argument is stated in those terms, language choice stops being symbolic and becomes financial.
Rust often pays for itself where the largest future cost would come from correctness failures in hard stateful logic or from maintaining complex services without enough structural discipline. C++ often pays for itself where the largest future cost would come from hot-path inefficiency, too much abstraction in repeated computation, or the difficulty of integrating with high-performance native infrastructure. A sensible team asks which cost will dominate over the life of the subsystem and chooses accordingly.
This perspective also helps with one common confusion: settlement speed and execution-path speed are not the same thing. A blockchain may have one set of timing characteristics at the protocol level while off-chain systems surrounding it live in a completely different latency world. Slow on-chain settlement does not make fast off-chain evaluation irrelevant. In fact, when opportunities are contested, off-chain speed can become even more valuable because it shapes who reacts, who prices accurately, and who submits a useful action first. Engineers who flatten these two timing domains into one concept called speed usually end up misplacing effort.
Hybrid Architecture Is Often the Adult Answer
Many of the most serious DEX architectures become easier to reason about once hybrid design is allowed to be respectable. On-chain logic can live in the language and framework environment that the chain expects. Control-plane and product services can choose the language that keeps maintenance sane. Hot-path simulation, routing, market-data processing, or matching-adjacent components can stay close to the performance traditions that make them easier to tune and verify. The result is not an ideological compromise. It is a system where each part is allowed to optimize for its real burden.
This does require maturity. Hybrid systems are only healthy when boundaries are explicit. Teams need clear interfaces, narrow responsibility splits, and honesty about where complexity belongs. But that is true regardless of language. A one-language architecture with confused boundaries is not simpler than a two-language architecture with clean ones. Sometimes it is simply a single-language expression of the same confusion.
There is also a staffing dimension here. Teams often imagine they must choose one language because hiring across multiple native domains feels difficult. That concern is understandable, but it can become an excuse for architectural laziness. A better question is whether the most performance-sensitive layer truly needs its own language or whether the profiler has not yet justified that cost. Some teams should absolutely stay mostly in Rust and only introduce C++ when a hot path has earned it. Others already have deep C++ expertise and would harm themselves by forcing everything into a Rust-shaped workflow that does not match their strongest systems instincts. Context again matters more than prestige.
What AI-Assisted Engineering Changes
The arrival of AI coding systems actually strengthens the case for contextual language choice rather than weakening it. In Rust-first blockchain ecosystems, agents can help with framework-aware scaffolding, routine service code, and some categories of refactor more comfortably than before. But in low-level, performance-heavy native subsystems, the balance still tilts toward C++ for a simple reason: public code, public tooling, and public integration examples are far denser there. Agents currently have more historical material from which to produce useful drafts for the kinds of hot-path infrastructure DEX systems often need.
This does not mean AI makes C++ universally superior. It means the old ecosystem gravity is now amplified by a new tool. When an assistant helps debug a CMake integration, suggest a queue redesign, improve a parser, or draft a benchmark for a simulation loop, it benefits from the deep native memory of the public C++ world. When an assistant works inside a Rust-first on-chain environment, the opposite can be true. The language decision still belongs to the workload, but the AI era makes environmental density even more consequential than before.
My Practical Recommendation
If you are building chain-native programs in a Rust-first ecosystem, do not fight the terrain for the sake of language rhetoric. Let Rust lead where it is already the natural home of correctness, tooling, and community practice. If you are building off-chain infrastructure that behaves like performance-sensitive exchange engineering, do not abandon C++ merely because the product domain is crypto. Let C++ do the work it still does exceptionally well: fast ingestion, repeated simulation, tight routing logic, and low-level systems control.
And if your architecture truly spans both worlds, embrace that fact without embarrassment. Good engineering is not made purer by pretending every component suffers from the same kind of failure. It is made stronger by assigning each component a language that respects the physics of its actual job.
There is a quiet optimism in approaching the problem this way. It reminds engineers that architecture can be calmer than public discourse. We do not have to choose one language to win the argument forever. We only have to choose the right tool for the next honest layer of the system. That is a much more profitable kind of intelligence.
Hands-On Lab: Build a tiny AMM route evaluator
Let us build something small enough to understand and real enough to touch.
The goal is not to recreate Uniswap. The goal is to feel how quickly DEX work becomes a matter of repeated simulation and comparison.
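The lab's swap function implements the standard constant-product formula with the fee taken on the input side. For readers who want the algebra: writing x and y for the input and output reserves, f for the fee, and Δx for the input amount, the invariant before and after the swap gives the output amount directly.

```latex
(x + \Delta x_{\mathrm{eff}})(y - \Delta y) = x y,
\qquad \Delta x_{\mathrm{eff}} = (1 - f)\,\Delta x
\;\Longrightarrow\;
\Delta y \;=\; y - \frac{x y}{x + \Delta x_{\mathrm{eff}}}
         \;=\; \frac{y \,\Delta x_{\mathrm{eff}}}{x + \Delta x_{\mathrm{eff}}}
```

This is exactly the expression computed in swap_out below.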
main.cpp
#include <iomanip>
#include <iostream>
#include <string>
#include <vector>

struct Pool {
    std::string name;
    double reserve_in;
    double reserve_out;
    double fee;  // 0.003 for 0.3%
};

// Constant-product swap: the fee is taken on the input side.
double swap_out(const Pool& p, double amount_in) {
    const double effective_in = amount_in * (1.0 - p.fee);
    return (effective_in * p.reserve_out) / (p.reserve_in + effective_in);
}

// Route the input through pool a, then swap the intermediate amount through pool b.
double two_hop(const Pool& a, const Pool& b, double amount_in) {
    const double mid = swap_out(a, amount_in);
    return swap_out(b, mid);
}

int main() {
    Pool eth_usdc_a{"ETH/USDC pool A", 500.0, 1750000.0, 0.003};
    Pool eth_usdc_b{"ETH/USDC pool B", 650.0, 2262000.0, 0.0005};
    Pool usdc_dai{"USDC/DAI stable pool", 900000.0, 901200.0, 0.0001};

    const double trade_eth = 4.0;
    const double direct_a = swap_out(eth_usdc_a, trade_eth);
    const double direct_b = swap_out(eth_usdc_b, trade_eth);
    const double routed = two_hop(eth_usdc_b, usdc_dai, trade_eth);

    std::cout << std::fixed << std::setprecision(4);
    std::cout << "Input: " << trade_eth << " ETH\n";
    std::cout << "Direct via " << eth_usdc_a.name << ": " << direct_a << " USDC\n";
    std::cout << "Direct via " << eth_usdc_b.name << ": " << direct_b << " USDC\n";
    std::cout << "Two-hop via " << eth_usdc_b.name << " -> " << usdc_dai.name
              << ": " << routed << " DAI\n";

    if (direct_b > direct_a) {
        std::cout << "Best direct route: " << eth_usdc_b.name << "\n";
    } else {
        std::cout << "Best direct route: " << eth_usdc_a.name << "\n";
    }
}
Build
On Linux or macOS:
g++ -O2 -std=c++20 -o amm_router main.cpp
./amm_router
On Windows:
cl /O2 /std:c++20 /EHsc main.cpp
.\main.exe
Why this matters
Even this tiny program already hints at the real shape of off-chain DEX work:
- repeated path evaluation
- fee-aware comparison
- state-dependent output
- constant tension between correctness and speed
Scale this up to hundreds of pools, frequent state updates, and adversarial timing pressure, and you begin to see why language choice stops being abstract very quickly.
Test Tasks for Enthusiasts
- Add slippage tolerance and reject routes whose effective output falls below a configured threshold.
- Extend the program to compare five or ten pools instead of two and profile where the time goes.
- Add a loop that re-evaluates the route one million times with slightly changing reserves and measure how a "toy" router starts to resemble a real hot path.
- Replace floating-point output formatting with structured numeric logging and observe how much "non-math" work appears around the actual route logic.
- Add a second version in Rust or another language and compare not only raw runtime, but also how comfortable the language feels once the simulation loop becomes the center of the work.
This is a good exercise because it reveals something subtle: in exchange software, the interesting difficulty often lies not in a single formula, but in the repeated, stateful, latency-sensitive use of many ordinary formulas at once.
Summary
C++ and Rust both belong in decentralized exchange engineering, but they belong there for different reasons. Rust earns trust in ecosystems and layers where state safety, auditability, and chain-native workflow are central. C++ earns trust in layers where the work starts to look like exchange infrastructure again: repeated simulation, market-data processing, routing, search, and other hot-path systems that reward tight control over memory, scheduling, and performance verification.
The most useful question is therefore not which language wins the whole stack. It is which layer we are actually designing and what kind of failure that layer can least afford. Once that question is asked honestly, the architecture usually becomes much clearer, and the argument becomes less ideological. A well-designed DEX is rarely a monument to language purity. It is a practical arrangement of components, each written in the language that best respects the burden it carries.
References
- Uniswap v3 whitepaper: https://uniswap.org/whitepaper-v3.pdf
- Uniswap v3 core repository: https://github.com/Uniswap/v3-core
- Ethereum.org MEV documentation: https://ethereum.org/developers/docs/mev/
- Solana programs overview: https://solana.com/docs/core/programs
- Solana Rust program development: https://solana.com/docs/programs/rust
- Anchor documentation: https://www.anchor-lang.com/docs
- dYdX Chain documentation: https://docs.dydx.exchange/
- dYdX integration documentation: https://docs.dydx.xyz/
- dYdX on off-chain order books with on-chain settlement: https://integral.dydx.exchange/dydx-closes-10m-series-b-investment/
- Cosmos SDK documentation: https://docs.cosmos.network/
What This Looks Like When the System Is Already Under Pressure
Language choice in DEX infrastructure tends to become urgent at the exact moment a team was hoping for a quieter quarter. A feature is already in front of customers, or a platform already carries internal dependence, and the system has chosen that particular week to reveal that its elegant theory and its runtime behavior have been politely living separate lives. This is why so much serious engineering work starts not with invention but with reconciliation. The team needs to reconcile what it believes the system does with what the system actually does under load, under change, and under the sort of deadlines that make everybody slightly more creative and slightly less wise.
In crypto systems engineering, the cases that matter most are usually searcher and simulator backends, latency-sensitive routing services, and off-chain risk and settlement infrastructure. Those are not only technical situations. They are budget situations, trust situations, roadmap situations, and in some companies reputation situations. A technical problem becomes politically larger the moment several teams depend on it and nobody can quite explain why it still behaves like a raccoon inside the walls: noisy at night, hard to locate, and expensive to ignore.
That is why we recommend reading the problem through the lens of operating pressure, not only through the lens of elegance. A design can be theoretically beautiful and operationally ruinous. Another design can be almost boring and yet carry the product forward for years because it is measurable, repairable, and honest about its tradeoffs. Serious engineers learn to prefer the second category. It makes for fewer epic speeches, but also fewer emergency retrospectives where everybody speaks in the passive voice and nobody remembers who approved the shortcut.
Practices That Consistently Age Well
The first durable practice is to keep one representative path under constant measurement. Teams often collect too much vague telemetry and too little decision-quality signal. Pick the path that genuinely matters, measure it repeatedly, and refuse to let the discussion drift into decorative storytelling. In work around language choice in DEX infrastructure, the useful measures are usually hot-path determinism, operational clarity, interop surface, and simulation realism. Once those are visible, the rest of the decisions become more human and less mystical.
The second durable practice is to separate proof from promise. Engineers are often pressured to say that a direction is right before the system has earned that conclusion. Resist that pressure. Build a narrow proof first, especially when the topic is close to customers or money. A small verified improvement has more commercial value than a large unverified ambition. This sounds obvious until a quarter-end review turns a hypothesis into a deadline and the whole organization starts treating optimism like a scheduling artifact.
The third durable practice is to write recommendations in the language of ownership. A paragraph that says "improve performance" or "strengthen boundaries" is emotionally pleasant and operationally useless. A paragraph that says who changes what, in which order, with which rollback condition, is the one that actually survives Monday morning. This is where a lot of technical writing fails. It wants to sound advanced more than it wants to be schedulable.
Counterexamples That Save Time
One of the most common counterexamples looks like this: the team has a sharp local success, assumes the system is now understood, and then scales the idea into a much more demanding environment without upgrading the measurement discipline. That is the engineering equivalent of learning to swim in a hotel pool and then giving a confident TED talk about weather at sea. Water is water right up until it is not.
Another counterexample is tool inflation. A new profiler, a new runtime, a new dashboard, a new agent, a new layer of automation, a new wrapper that promises to harmonize the old wrapper. None of these things are inherently bad. The problem is what happens when they are asked to compensate for a boundary nobody has named clearly. The system then becomes more instrumented, more impressive, and only occasionally more understandable. Buyers feel this very quickly. They may not phrase it that way, but they can smell when a stack has become an expensive substitute for a decision.
The third counterexample is treating human review as a failure of automation. In real systems, human review is often the control that keeps automation commercially acceptable. Mature teams know where to automate aggressively and where to keep approval or interpretation visible. Immature teams want the machine to do everything because "everything" sounds efficient in a slide. Then the first serious incident arrives, and suddenly manual review is rediscovered with the sincerity of a conversion experience.
A Delivery Pattern We Recommend
If the work is being done well, the first deliverable should already reduce stress. Not because the system is fully fixed, but because the team finally has a technical read strong enough to stop arguing in circles. After that, the next bounded implementation should improve one crucial path, and the retest should make the direction legible to both engineering and leadership. That sequence matters more than the exact tool choice because it is what turns technical skill into forward motion.
In practical terms, we recommend a narrow first cycle: gather artifacts, produce one hard diagnosis, ship one bounded change, retest the real path, and write the next decision in plain language. Plain language matters. A buyer rarely regrets clarity. A buyer often regrets being impressed before the receipts arrive.
This is also where tone matters. Strong technical work should sound like it has met production before. Calm, precise, and slightly amused by hype rather than nourished by it. That tone is not cosmetic. It signals that the team understands the old truth of systems engineering: machines are fast, roadmaps are fragile, and sooner or later the bill arrives for every assumption that was allowed to remain poetic.
The Checklist We Would Use Before Calling This Ready
In crypto systems engineering, readiness is not a mood. It is a checklist with consequences. Before we call work around language choice in DEX infrastructure ready for a wider rollout, we want a few things to be boring in the best possible way. We want one path that behaves predictably under representative load. We want one set of measurements that does not contradict itself. We want the team to know where the boundary sits and what it would mean to break it. And we want the output of the work to be clear enough that somebody outside the implementation room can still make a sound decision from it.
That checklist usually touches hot-path determinism, operational clarity, interop surface, and simulation realism. If the numbers move in the right direction but the team still cannot explain the system without improvising, the work is not ready. If the architecture sounds impressive but cannot survive a modest counterexample from the field, the work is not ready. If the implementation exists but the rollback story sounds like a prayer with timestamps, the work is not ready. None of these are philosophical objections. They are simply the forms in which expensive surprises tend to introduce themselves.
This is also where teams discover whether they were solving the real problem or merely rehearsing competence in its general vicinity. A great many technical efforts feel successful right up until somebody asks for repeatability, production evidence, or a decision that will affect budget. At that moment the weak work goes blurry and the strong work becomes strangely plain. Plain is good. Plain usually means the system has stopped relying on charisma.
How We Recommend Talking About the Result
The final explanation should be brief enough to survive a leadership meeting and concrete enough to survive an engineering review. That is harder than it sounds. Overly technical language hides sequence. Overly simplified language hides risk. The right middle ground is to describe the path, the evidence, the bounded change, and the next recommended step in a way that sounds calm rather than triumphant.
We recommend a structure like this. First, say what path was evaluated and why it mattered. Second, say what was wrong or uncertain about that path. Third, say what was changed, measured, or validated. Fourth, say what remains unresolved and what the next investment would buy. That structure works because it respects both engineering and buying behavior. Engineers want specifics. Buyers want sequencing. Everybody wants fewer surprises, even the people who pretend they enjoy them.
The hidden benefit of speaking this way is cultural. Teams that explain technical work clearly usually execute it more clearly too. They stop treating ambiguity as sophistication. They become harder to impress with jargon and easier to trust with difficult systems. That is not only good writing. It is one of the more underrated forms of engineering maturity.
What We Would Still Refuse to Fake
Even after the system improves, there are things we would still refuse to fake in crypto systems engineering. We would not fake confidence where measurement is weak. We would not fake simplicity where the boundary is still genuinely hard. We would not fake operational readiness just because the demo looks calmer than it did two weeks ago. Mature engineering knows that some uncertainty must be reduced and some uncertainty must merely be named honestly. Confusing those two jobs is how respectable projects become expensive parables.
The same rule applies to decisions around language choice in DEX infrastructure. If a team still lacks a reproducible benchmark, a trustworthy rollback path, or a clear owner for the critical interface, then the most useful output may be a sharper no or a narrower next step rather than a bigger promise. That is not caution for its own sake. It is what keeps technical work aligned with the reality it is meant to improve.
There is a strange relief in working this way. Once the system no longer depends on optimistic storytelling, the engineering conversation gets simpler. Not easier, always, but simpler. And in production that often counts as a minor form of grace.
Additional Notes on DEX Infrastructure Planning
A good language split in DEX infrastructure usually looks modest on paper. One language owns the place where predictability, legacy leverage, or raw systems familiarity matters most. The other owns the place where boundary discipline and newer component isolation make the delivery story healthier. The mistake is trying to turn language choice into ideology. Trading systems do not care about ideology. They care about missed packets, unstable queues, false simulation confidence, and the invoice for pretending otherwise.
That is why we recommend architecture maps that show exactly where the languages meet, how those seams are tested, and which operational metrics belong to each side. If a mixed C++/Rust stack cannot be explained to operations in one calm diagram, it probably is not ready. And if it can be explained clearly, the mixed stack often stops looking exotic at all. It simply looks like engineering that was willing to choose fit over fashion.
Field Notes from a Real Technical Review
In C++ systems delivery, the work becomes serious when the demo meets real delivery, real users, and real operating cost. That is the moment where a tidy idea starts behaving like a system, and systems have a famously dry sense of humor. They do not care how elegant the kickoff deck looked. They care about boundaries, failure modes, rollout paths, and whether anyone can explain the next step without inventing a new mythology around the stack.
For C++, Rust, and Decentralized Crypto Exchanges: Applicability and Efficiency, the practical question is not only whether the technique is interesting. The practical question is whether it creates a stronger delivery path for a buyer who already has pressure on a roadmap, a platform, or a security review. That buyer does not need a lecture polished into fog. They need a technical read they can use.
What we would inspect first
We would begin with one representative path drawn from the domains this work touches: native inference, profiling, HFT paths, DEX systems, and C++/Rust modernization choices. That path should be narrow enough to measure and broad enough to expose the truth. The first pass should capture allocation behavior, p99 latency, profile evidence, ABI friction, and release confidence. If those signals are unavailable, the project is still mostly opinion wearing a lab coat, and opinion has a long history of billing itself as strategy.
The first useful artifact is a native-systems read with benchmarks, profiling evidence, and a scoped implementation plan. It should show the system as it behaves, not as everybody hoped it would behave in the planning meeting. A trace, a replay, a small benchmark, a policy matrix, a parser fixture, or a repeatable test often tells the story faster than another abstract architecture discussion. Good artifacts are wonderfully rude. They interrupt wishful thinking.
A counterexample that saves time
The expensive mistake is to respond with a solution larger than the first useful proof. A team sees risk or delay and immediately reaches for a new platform, a rewrite, a sweeping refactor, or a procurement-friendly dashboard with a name that sounds like it does yoga. Sometimes that scale is justified. Very often it is a way to postpone measurement.
The better move is smaller and sharper. Name the boundary. Capture evidence. Change one important thing. Retest the same path. Then decide whether the next investment deserves to be larger. This rhythm is less dramatic than a transformation program, but it tends to survive contact with budgets, release calendars, and production incidents.
The delivery pattern we recommend
The most reliable pattern has four steps. First, collect representative artifacts. Second, turn those artifacts into one hard technical diagnosis. Third, ship one bounded change or prototype. Fourth, retest with the same measurement frame and document the next decision in plain language. In this class of work, CMake fixtures, profiling harnesses, small native repros, and compiler/runtime notes are usually more valuable than another meeting about general direction.
Plain language matters. A buyer should be able to read the output and understand what changed, what remains risky, what can wait, and what the next step would buy. If the recommendation cannot be scheduled, tested, or assigned to an owner, it is still too decorative. Decorative technical writing is pleasant, but production systems are not known for rewarding pleasantness.
How to judge whether the result helped
For C++, Rust, and Decentralized Crypto Exchanges: Applicability and Efficiency, the result should improve at least one of three things: delivery speed, system confidence, or commercial readiness. If it improves none of those, the team may have learned something, but the buyer has not yet received a useful result. That distinction matters. Learning is noble. A paid engagement should also move the system.
The strongest outcome is not always the biggest build. Sometimes it is a narrower roadmap, a refusal to automate a dangerous path, a better boundary around a model, a cleaner native integration, a measured proof that a rewrite is not needed yet, or a short remediation list that leadership can actually fund. Serious engineering is a sequence of better decisions, not a costume contest for tools.
How SToFU would approach it
SToFU would treat this as a delivery problem first and a technology problem second. We would bring the relevant engineering depth, but we would keep the engagement anchored to evidence: the path, the boundary, the risk, the measurement, and the next change worth making. The point is not to make hard work sound easy. The point is to make the next serious move clear enough to execute.
That is the part buyers usually value most. They can hire opinions anywhere. What they need is a team that can inspect the system, name the real constraint, build or validate the right slice, and leave behind artifacts that reduce confusion after the call ends. In a noisy market, clarity is not a soft skill. It is infrastructure.