Why C++ Still Beats Rust in the AI Era

Introduction

Arguments about programming languages often become moral theater long before they become engineering. One language is described as clean, the other as burdened. One is imagined as the future, the other as baggage from the past. These stories are emotionally satisfying because they make history feel neat. They also mislead teams that have to ship systems under deadlines, budgets, integration constraints, and now an additional force that did not exist in the same way ten years ago: AI coding assistants and agents.

Once code generation becomes part of day-to-day delivery, the question changes. It is no longer only "Which language is elegant?" or "Which language is safe by default?" The harder and more practical question becomes this: if a team expects AI systems to help write, refactor, benchmark, integrate, and debug production code, which language currently gives those systems the richest environment in which to be useful? My answer remains C++, and the heart of the argument is neither nostalgia nor machismo. It is density.

C++ still sits inside a denser world of public code, deployed infrastructure, vendor tooling, platform examples, optimization folklore, and real production scars than Rust does. AI models learn from that density. They do not only learn syntax. They learn how people stitched together large systems, how build files evolved, how ugly integrations were made to work, how low-level bugs were diagnosed, and how performance-sensitive code was actually written in anger rather than in theory. When those models are later asked to help with real engineering, the shape of that historical memory matters.

This does not mean Rust is weak, unserious, or irrelevant. On the contrary, Rust has brought healthy pressure into systems programming. It made memory safety impossible to ignore, improved the tone of many engineering conversations, and produced genuinely strong tooling and libraries. But the existence of Rust's strengths does not automatically erase C++'s current advantages in AI-assisted delivery. Mature engineering often requires holding both truths at once.

Evidence First, Slogans Later

A careful argument begins by separating what can be publicly observed from what must be inferred. Public datasets used in code-model research, such as The Stack, show substantially more C++ than Rust. Public developer surveys and GitHub language trends continue to show broader absolute use of C++ across industry. Public AI infrastructure, from vendor SDKs to optimized inference runtimes to low-level math libraries, still exposes a world that is deeply C and C++ shaped. Public benchmarking efforts such as CRUST-Bench also suggest that current models still struggle to consistently generate safe, idiomatic Rust in the strong sense that Rust communities value.

From those facts we make an inference, not a dogma. The inference is that AI systems are currently more likely to generate production-useful, integratable, and optimizable C++ in many systems domains because the surrounding environment for C++ is richer. This is not magic. It is exposure combined with feedback. A language with more repositories, more build scripts, more hardware-facing examples, more vendor integrations, more public bug fixes, more performance investigations, and more production war stories offers a model more ways to be approximately right before a human engineer even begins correcting it.

This point is often resisted because it sounds ungenerous to the newer language. But it is not an insult to Rust to say that it has had less time to accumulate public engineering sediment. C++ has been embedded for decades in operating systems, browsers, databases, media stacks, security tools, game engines, telecom, scientific computing, embedded products, and financial systems. Rust has grown quickly and admirably, but growth is not the same thing as geological depth. AI models absorb depth.

Why Corpus Size Matters More Than People Admit

Engineers sometimes treat training data volume as if it were a crude talking point. In practice it matters in a much more human way. An AI agent working in a production codebase is usually not inventing a perfect algorithm from first principles. It is doing something messier. It may be updating a CMake file, adapting to a compiler complaint on one platform, replacing a hot-path container, wrapping a vendor API, converting image or tensor layouts, fixing an ABI mismatch, or making an old native subsystem slightly less painful without breaking everything around it.
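One of those tasks, replacing a hot-path container, can be made concrete. The sketch below is illustrative, not from any real codebase: it shows the kind of change an agent might propose for a read-heavy lookup path, swapping a node-based map for a sorted vector with binary search, which is contiguous and cache-friendly.

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Hypothetical read-mostly lookup table: built once, queried in a hot path.
// A sorted vector plus binary search often beats std::map here because the
// entries sit in contiguous memory.
class FlatLookup {
public:
    explicit FlatLookup(std::vector<std::pair<std::string, int>> entries)
        : entries_(std::move(entries)) {
        std::sort(entries_.begin(), entries_.end());
    }

    // Returns the value for `key`, or `fallback` if the key is absent.
    int find_or(const std::string& key, int fallback) const {
        auto it = std::lower_bound(
            entries_.begin(), entries_.end(), key,
            [](const auto& e, const std::string& k) { return e.first < k; });
        return (it != entries_.end() && it->first == key) ? it->second : fallback;
    }

private:
    std::vector<std::pair<std::string, int>> entries_;
};
```

The interesting part is not the container itself but the verification step that must follow: a change like this only counts once a benchmark confirms it on the team's actual data.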

Those tasks reward familiarity with ordinary, imperfect, lived code. The agent benefits from having seen not just clean textbook examples but thousands of real attempts to solve adjacent problems. C++ gives models far more of that material. There is more modern C++, more legacy C++ being slowly repaired, more benchmark-driven C++, more embarrassing C++ that still somehow runs important businesses, and more examples of people navigating exactly the kind of compromises that real systems demand.

This is why "messy production C++" is still valuable training data. Some engineers hear that phrase and imagine it weakens the case. In reality it strengthens it. Production systems are not composed entirely of elegant greenfield modules. They include legacy interfaces, odd ABI assumptions, platform conditionals, hardware quirks, partial migrations, and code that survived because it was useful before it was beautiful. If an AI system has seen many more examples of that landscape in C++, it is simply better prepared to help inside such a landscape.

A counterexample is worth stating openly. If a team is building a small greenfield service with strong Rust expertise, clear safety requirements, modest integration needs, and no heavy native ecosystem around it, Rust may be a better local choice. In that situation the argument from corpus size is less decisive because the surrounding engineering context is simpler and the human team can keep the system inside a narrower band of complexity. The point is not that C++ wins every argument. The point is that as the problem becomes older, stranger, more performance-sensitive, and more entangled with existing native infrastructure, C++ increasingly becomes the easier language for AI systems to help with effectively.

The AI Infrastructure World Is Still C++ Shaped

Even if we ignored training data volume entirely, there would still be a second force pulling the default toward C++: the infrastructure beneath modern AI products remains strongly native. CUDA, optimized math libraries, ONNX Runtime internals, oneDNN, OpenVINO, tokenizer implementations, multimedia preprocessing pipelines, model-serving accelerators, hardware vendor SDKs, and many deployment runtimes either are written in C or C++ or expose their most serious interfaces there. This does not mean Rust cannot call into them. It means the shortest path through the landscape is still usually a C or C++ path.
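The shape of that path is familiar to anyone who has wrapped a vendor SDK. A minimal sketch, using an entirely hypothetical C API (the `vx_*` names are invented for illustration, not a real SDK), of the RAII wrapper pattern that keeps a C-style handle exception-safe without leaving the native world:

```cpp
#include <memory>
#include <stdexcept>

// Hypothetical C-style vendor API of the kind most inference runtimes expose.
// These declarations stand in for a real SDK header.
extern "C" {
    struct vx_session;               // opaque handle
    vx_session* vx_create_session(); // returns nullptr on failure
    void vx_destroy_session(vx_session*);
}

// Stub implementations so the sketch compiles standalone; a real build
// would link the vendor library instead.
extern "C" vx_session* vx_create_session() {
    return reinterpret_cast<vx_session*>(new int(1));
}
extern "C" void vx_destroy_session(vx_session* s) {
    delete reinterpret_cast<int*>(s);
}

// A thin RAII wrapper: the handle is released on every exit path,
// including exceptions, without any manual cleanup code.
class Session {
public:
    Session() : handle_(vx_create_session(), &vx_destroy_session) {
        if (!handle_) throw std::runtime_error("vx_create_session failed");
    }
    vx_session* raw() const { return handle_.get(); }

private:
    std::unique_ptr<vx_session, decltype(&vx_destroy_session)> handle_;
};
```

This is exactly the kind of boundary code agents are asked to write constantly, and it is the kind of code for which C++ has the largest body of public precedent.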

That matters because AI coding agents are not useful in a vacuum. They are useful inside dependency graphs. A model that is asked to help integrate a runtime, debug a build, tune a hot path, or reason about ownership across a vendor SDK boundary is advantaged when it has seen many adjacent examples in the same language family. C++ still benefits from that environmental familiarity more than Rust in most performance-critical AI infrastructure work.

This is also where the conversation about feedback loops becomes important. AI-generated code only becomes truly valuable when humans can verify it quickly. C++ often gives teams richer local verification in these domains because the ecosystem around benchmarking, profiling, replay, sanitizers, hardware counters, and low-level diagnostics is so mature. When an agent proposes a change in a C++ inference path, a team can often compile it, profile it, inspect the allocation behavior, compare latency distributions, and iterate rapidly. Rust absolutely has strong tooling too, but in many AI-adjacent native systems the combined density of libraries, examples, profilers, and existing practice still makes C++ the easier place to run tight human-in-the-loop correction loops.
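One of those local verification tricks can be shown directly. The sketch below, assuming a single-threaded program, counts heap allocations across a code path by replacing the global `operator new`, which is a standard-conformant way to confirm a claim like "this change removes per-row allocations" before trusting it:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Global allocation counter: crude but effective for local verification.
// Not thread-safe; this sketch assumes a single-threaded test harness.
static std::size_t g_allocs = 0;

void* operator new(std::size_t n) {
    ++g_allocs;
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

// Returns the number of heap allocations performed by f().
template <typename F>
std::size_t count_allocations(F&& f) {
    const std::size_t before = g_allocs;
    f();
    return g_allocs - before;
}
```

When an agent claims an optimization "avoids allocations", a helper like this turns the claim into a checkable assertion rather than a review-time argument.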

Why Teams Often Move Faster With C++ Even When Rust Looks Cleaner

This is the point that tends to offend ideology, because it sounds impolite to cleanliness. Rust often looks cleaner on the whiteboard. Ownership is explicit. The compiler guards important mistakes. The culture around correctness is admirable. But production speed is not identical to language elegance. Real delivery speed emerges from the whole loop: existing codebase, available libraries, talent pool, debugging tools, deployment constraints, AI assistance quality, and the cost of making one more change next month.

C++ currently wins that broader loop in many AI-era systems because teams can ask more of the surrounding world without leaving the language. They can integrate old native libraries, attach profilers that were built with native performance work in mind, tune allocators, exploit platform-specific facilities, and draw from a much larger body of public examples when something goes wrong. AI assistants benefit from exactly the same reality. When the world around the model is dense and well-traveled, the model's rough drafts improve faster.

Imagine two teams building a latency-sensitive inference service with some custom preprocessing, a complicated deployment matrix, and a need for repeated performance tuning. The Rust team may produce a smaller set of memory-safety bugs, and that is not trivial. But if the C++ team can integrate the ecosystem more directly, get stronger AI suggestions in the actual codebase they have, and verify performance changes faster with mature native tooling, the overall delivery outcome may still favor C++. In business terms, that matters more than whether one language won a philosophical argument online.

A useful counterexample keeps us honest. If the dominant risk in a project is not integration or performance evolution but memory safety in a new service with relatively simple dependencies, Rust can absolutely create better organizational outcomes. The mistake is to take that truth and export it indiscriminately into every AI-adjacent systems problem. Languages win in contexts, not in sermons.

What Rust Still Gets Right

Rust deserves respect, and the argument for C++ is weaker when it caricatures Rust. Rust is excellent at making unsafe assumptions visible. It creates strong discipline around ownership and lifetimes. It is often a compelling choice for greenfield infrastructure where correctness and maintainability dominate over compatibility with an existing native world. In some teams, Rust also improves hiring clarity because the codebase itself enforces a certain kind of engineering seriousness.

It is also important to say plainly that C++ does not win by default just because it is older. Undisciplined C++ remains dangerous. If a team has weak review culture, no profiling habit, poor testing, and no respect for observability, then larger corpora and richer tooling will not save it. AI systems can amplify that chaos just as easily as they can accelerate good engineering. The real claim is narrower and more practical: given disciplined teams solving performance-sensitive, integration-heavy, AI-era systems problems, C++ is still the stronger default bet today because agents, tools, and ecosystem gravity all reinforce it.

This is why I prefer the phrase default bet rather than universal winner. A default bet is what you choose when the burden of proof has not yet shifted elsewhere. Rust can earn that shift in specific projects. But C++ still starts with more evidence in its favor whenever the work is deeply entangled with native AI infrastructure, low-level performance, long-lived production systems, or the sort of codebase AI agents have seen in vast public quantity.

A Practical Way to Decide

If the hot path is native, the dependency graph is native, the profiling story matters, and you expect AI assistants to help inside messy real production code, C++ deserves to be your first serious language discussion. If the system is greenfield, the safety case dominates, the surrounding ecosystem is already Rust-shaped, and the problem does not depend heavily on old native strata, Rust becomes more attractive. If the system contains both worlds, which many do, the mature answer is often hybrid architecture rather than tribal purity.

This framework calms the conversation because it returns the decision to work rather than identity. A native inference runtime inside an existing C++ platform is not the same problem as a new control-plane service. A low-latency media pipeline is not the same problem as a backend API. A model-serving edge component is not the same problem as a chain-native state-transition engine. Once we name the actual work, the language choice usually looks less ideological and more obvious.

There is also a human benefit to making the decision this way. Teams become more cooperative when they stop asking which language deserves admiration and start asking which language gives the current system the best chance of becoming reliable, intelligible, and improvable. AI assistance makes this even more important. Agents are powerful when they are embedded in a culture of verification, not when they are used to decorate language fandom with synthetic confidence.

The Real Opportunity

The deeper opportunity in the AI era is not merely that agents can write code. It is that they can now participate in the entire feedback loop around mature systems: reading old code, proposing edits, improving benchmarks, surfacing profiler clues, translating rough ideas into compilable experiments, and helping engineers move from suspicion to measurement faster than before. In that world, the language that benefits most is not necessarily the one with the nicest theoretical story. It is the one with the thickest web of public, practical, battle-tested reality.

Today, for a large class of serious systems problems, that language is still C++. And that is good news, not because the industry should stop learning from Rust, but because teams can use the huge body of existing native knowledge rather than pretending it vanished the moment AI arrived. The most productive posture is not triumphalism. It is gratitude. C++ accumulated decades of real engineering memory, and AI systems now make that memory easier to use. Wise teams will take advantage of it.

Hands-On Lab: Build and improve a native scoring pipeline

If an article about AI-era language choice contains no code, it risks becoming a sermon.

So let us build a small native C++ utility of the kind AI agents are constantly asked to improve in real companies: a text scoring pipeline that loads data, computes simple features, sorts the results, and prints the top rows.

It is modest on purpose. Most production engineering is modest.

main.cpp

#include <algorithm>
#include <chrono>
#include <cctype>
#include <fstream>
#include <iostream>
#include <string>
#include <string_view>
#include <vector>

struct Sample {
    std::string text;
    double score = 0.0;
};

static int count_digits(std::string_view s) {
    int n = 0;
    for (unsigned char c : s) {
        n += std::isdigit(c) ? 1 : 0;
    }
    return n;
}

static int count_upper(std::string_view s) {
    int n = 0;
    for (unsigned char c : s) {
        n += std::isupper(c) ? 1 : 0;
    }
    return n;
}

static int count_punct(std::string_view s) {
    int n = 0;
    for (unsigned char c : s) {
        n += std::ispunct(c) ? 1 : 0;
    }
    return n;
}

static double score_line(std::string_view s) {
    const auto len = static_cast<double>(s.size());
    const auto digits = static_cast<double>(count_digits(s));
    const auto upper = static_cast<double>(count_upper(s));
    const auto punct = static_cast<double>(count_punct(s));
    return len * 0.03 + digits * 0.7 + upper * 0.15 - punct * 0.05;
}

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: scorer <input-file>\n";
        return 1;
    }

    std::ifstream in(argv[1]);
    if (!in) {
        std::cerr << "cannot open input file\n";
        return 1;
    }

    std::vector<Sample> rows;
    rows.reserve(200000);

    std::string line;
    while (std::getline(in, line)) {
        // Move the line into the row; getline refills `line` next iteration.
        rows.push_back({std::move(line), 0.0});
    }

    const auto t0 = std::chrono::steady_clock::now();
    for (auto& row : rows) {
        row.score = score_line(row.text);
    }
    std::sort(rows.begin(), rows.end(), [](const Sample& a, const Sample& b) {
        return a.score > b.score;
    });
    const auto t1 = std::chrono::steady_clock::now();

    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
    std::cout << "processed " << rows.size() << " rows in " << ms << " ms\n";

    const size_t limit = std::min<size_t>(5, rows.size());
    for (size_t i = 0; i < limit; ++i) {
        std::cout << rows[i].score << " | " << rows[i].text << "\n";
    }
}

Build

On Linux or macOS:

g++ -O2 -std=c++20 -o scorer main.cpp
./scorer sample.txt

On Windows with MSVC:

cl /EHsc /O2 /std:c++20 main.cpp
.\main.exe sample.txt

Why this tiny program is useful

Because it is exactly the kind of code where AI-assisted engineering becomes tangible:

  • it is native
  • it touches strings and memory
  • it has a measurable runtime
  • it can be profiled
  • it can be improved incrementally

That is the real habitat of many C++ agents today: not grand demonstrations, but ordinary native programs that need to become better without being reinvented.

Test Tasks for Enthusiasts

If you want to turn the article into a practical exercise, try these:

  1. Ask your favorite coding agent to optimize the program without changing output. Inspect whether it reduces duplicate passes or unnecessary temporaries.
  2. Add separate timing for file loading, scoring, and sorting. Verify where the time really goes.
  3. Replace the input with one million lines and compare the quality of optimizations suggested by different agents.
  4. Port the utility to Rust and compare the experience honestly: what felt clearer, what felt heavier, and what surrounding tooling felt more mature for this exact task.
  5. Run the C++ version under a profiler and write down whether your first guess about the hotspot was actually right.
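For task 2, one possible shape of the change is a small scope-based timer; this is a sketch of the idea, not the only way to do it, and the `PhaseTimer` name is invented for illustration:

```cpp
#include <chrono>
#include <iostream>
#include <string>
#include <utility>

// Times a named phase: reports elapsed milliseconds when the scope exits,
// so loading, scoring, and sorting each print their own measurement.
class PhaseTimer {
public:
    explicit PhaseTimer(std::string name)
        : name_(std::move(name)), start_(std::chrono::steady_clock::now()) {}

    // Milliseconds elapsed since construction.
    long long elapsed_ms() const {
        const auto now = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::milliseconds>(now - start_).count();
    }

    ~PhaseTimer() { std::cerr << name_ << ": " << elapsed_ms() << " ms\n"; }

private:
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};
```

Usage inside main() would be three scopes, one per phase: `{ PhaseTimer t("load"); /* getline loop */ }`, then the same for scoring and sorting. Separating the phases usually changes the conversation, because the dominant cost is often not where intuition placed it.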

This is a small exercise, but that is precisely why it is useful. Most engineering debates become more truthful when they are forced to survive contact with a small real program.

Summary

Rust deserves the respect it receives. It raised the standard for safety conversations and gave systems programming a healthier set of defaults. But the AI era is not rewarding defaults alone. It is rewarding the language that sits at the center of the largest living corpus of real code, the deepest ecosystem of low-level integrations, the richest optimization culture, and the fastest practical loop from generated draft to measurable production result. Today that still describes C++ more strongly than Rust.

That does not make C++ morally superior, and it does not make Rust irrelevant. It simply means that, for many serious native systems problems, AI agents still have more useful ground beneath their feet when the target world is C++. Teams that understand this can make better decisions without drama. They can learn from Rust where Rust is strongest, and still use the immense accumulated memory of C++ where that memory is most economically valuable.

References

  1. GitHub Octoverse 2024: https://github.blog/news-insights/octoverse/octoverse-2024/
  2. GitHub Octoverse 2025: https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/
  3. Stack Overflow Developer Survey 2023: https://survey.stackoverflow.co/2023
  4. Stack Overflow Developer Survey 2025 Technology section: https://survey.stackoverflow.co/2025/technology/
  5. The Stack dataset card: https://huggingface.co/datasets/bigcode/the-stack
  6. The Stack paper: https://arxiv.org/abs/2211.15533
  7. ICLR 2025 paper on the impact of code data in pre-training: https://openreview.net/pdf?id=zSfeN1uAcx
  8. CRUST-Bench: A Comprehensive Benchmark for C-to-safe-Rust Transpilation: https://arxiv.org/abs/2504.15254
  9. CUDA C++ Programming Guide: https://docs.nvidia.com/cuda/cuda-c-programming-guide/
  10. ONNX Runtime C/C++ API: https://onnxruntime.ai/docs/api/c/index.html
  11. PyTorch C++ frontend documentation: https://docs.pytorch.org/cppdocs/frontend.html
  12. C++ Core Guidelines: https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
Philip P. – CTO

Focused on fintech system engineering, low-level development, HFT infrastructure and building PoC to production-grade systems.
