Why C++ Still Beats Rust in the AI Era
Contents
- Introduction
- An important honesty note
- The corpus argument
- A historical comparison that still matters
- Why training data volume matters for AI coding agents
- The AI infrastructure stack is still deeply C++
- Why C++ gets better feedback loops in production
- What Rust gets right
- Where Rust is still a weaker bet for AI-assisted delivery
- Modern C++ is not the C++ of 1998
- What this looks like in real teams using agents
- A practical decision framework
- Examples and counterexamples in real AI delivery
- Where rhetoric misleads engineers
- Hands-On Lab: Build and improve a native scoring pipeline
- Test Tasks for Enthusiasts
- Summary
- References
Introduction
Hello friends!
This article is deliberately opinionated, but it is not meant to be careless.
The question is not whether Rust is a good language. It clearly is. Rust brought memory safety, ownership semantics, and a fresh systems-programming culture into the mainstream. That matters, and it will keep mattering.
But technical history has an odd habit of punishing neat moral narratives. The language that is cleaner is not always the language that dominates production. The language that is safer by default is not always the language that wins the most integrations. And the language that wins admiration is not always the language that receives the richest flow of patches, benchmarks, weird bug reports, compatibility hacks, and battle-tested operational knowledge.
The real question is different:
If you are building serious AI-era systems and you expect AI coding assistants or agents to help you ship production code, which language is the better default bet today: C++ or Rust?
My answer is C++.
Not because Rust is unserious. Not because memory safety stopped mattering. Not because hype should decide architecture. The reason is much more concrete:
AI coding systems are only as strong as the interaction between their training data, the surrounding ecosystem, and the feedback loops that production teams can apply. On those three axes, C++ still has a major advantage.
And yes, one of the biggest reasons is exactly the one many engineers now observe in practice:
AI systems have seen vastly more real, production, battle-tested C++ than real, production, battle-tested Rust.
An important honesty note
Before we go further, we should separate observable evidence from engineering judgment.
Here is what we can support publicly:
- Large public code corpora used for code-model research are heavily built from GitHub-scale source code datasets.
- In those public datasets, C++ appears in significantly larger volume than Rust.
- GitHub usage data and developer surveys still show broader real-world C++ usage than Rust usage, even though Rust is highly admired.
- Core AI infrastructure layers, vendor SDKs, performance libraries, and deployment runtimes are still deeply rooted in C and C++.
- Public benchmarks such as CRUST-Bench show that generating or translating into safe, idiomatic Rust remains difficult for current models.
Here is the engineering judgment I am making from those facts:
Because the public C++ corpus is larger, the infrastructure ecosystem is denser, and the performance toolchain is more mature, AI agents are currently more likely to generate useful, integratable, and optimizable production code in C++ than in Rust for many performance-critical domains.
That conclusion is an inference. It is not a universal mathematical law. There is no honest way to claim that every agent will always produce better C++ than Rust for every task.
But if we care about practical defaults rather than slogans, the inference is strong.
The corpus argument
Let us start with the simplest and most important point: live code volume matters.
The Stack, one of the best-known permissively licensed public code datasets used in code-model research, includes both cpp and rust in its language distribution. In one published language table, C++ accounts for 4.91% of total code documents, while Rust accounts for 1.00%. That gap is not small: the dataset contains roughly five times more C++ documents than Rust documents.
The GitHub picture points in the same direction. GitHub's Octoverse 2024 report lists C++ among the top languages used on GitHub, while Rust is described as rising but not in the top group. In other words, Rust is growing, but C++ still has much broader absolute presence in the public code universe where code assistants learn patterns.
Stack Overflow survey data tells the same story in human terms. In the 2023 survey, 20.21% of professional developers reported using C++, while 12.21% reported using Rust. Among people learning to code, C++ was also used much more often than Rust. Rust was the most admired language, but admiration and corpus size are not the same thing.
This distinction matters because AI agents do not learn from admiration. They learn from exposure, repetition, structure, corrections, conventions, and the density of examples around real tools.
Why "real, live code" matters more than marketing
The language that wins in AI-assisted engineering is not automatically the language with the cleanest theory.
It is usually the language that offers the model:
- more repositories
- more build scripts
- more debugging examples
- more integration patterns
- more platform-specific recipes
- more public incident fixes
- more benchmark code
- more examples of people doing ugly but necessary systems work
That description fits C++ much more strongly than Rust in 2026.
A historical comparison that still matters
It is useful, now and then, to remember that programming languages are not adopted all at once, like laws. They spread the way rivers spread: through terrain, through habit, through the shape of existing settlements.
C++ had decades to do exactly that.
It became embedded in:
- operating systems
- browsers
- databases
- telecom
- finance
- embedded systems
- media processing
- security products
- drivers
- scientific computing
That means C++ did not merely accumulate code. It accumulated consequences. Every incident report, every performance postmortem, every build-system workaround, every vendor integration, every ABI scar added one more piece of real engineering memory to the public world.
Rust, by contrast, entered the scene later and with a clearer philosophy. That gave it a great advantage in design coherence, but not yet the same geological depth in deployed systems.
This is why the public code corpus gap matters more than a superficial percentage table might suggest. It is not only that there is more C++ code. It is that there is more historically layered C++ code: old code, improved code, repaired code, optimized code, ugly code, transitional code, and code that survived contact with hardware, customers, and budgets.
AI agents absorb that density, even when imperfectly. And when they are asked to operate inside the same kind of dense environment, the older riverbed often still guides them.
Why training data volume matters for AI coding agents
People sometimes answer this topic too abstractly. Let us make it concrete.
When an AI agent helps with a production system, it is usually not writing a toy algorithm from scratch. It is doing things like:
- integrating a new library into CMake or Bazel
- fixing ABI issues
- handling platform-specific compiler flags
- converting image or tensor layouts
- adapting code to a vendor SDK
- reducing copies
- pinning threads
- rewriting hot loops
- interpreting compiler diagnostics
- reading a crash backtrace
- adding metrics
- patching code without breaking the surrounding codebase
These tasks are pattern-heavy. They depend on the agent having seen many examples of code plus context plus fixes.
That is exactly why corpus size matters.
There is also a second-order effect: a larger public corpus means more adjacent textual knowledge:
- issue tracker discussions
- Stack Overflow answers
- blog posts
- benchmark repos
- migration guides
- release notes
- profiler screenshots
- linker troubleshooting
This surrounding material teaches the agent not just syntax, but engineering behavior.
An ICLR 2025 paper on the impact of code data in pre-training makes a broader point that is very relevant here: including code in pre-training improves code performance, and different mixtures of code data change downstream results. If code data matters in general, then the amount and maturity of code available for a specific language matters too.
The AI infrastructure stack is still deeply C++
Even if we completely ignore corpus size, the surrounding AI stack still gives C++ an enormous practical advantage.
Look at the layers that serious AI products depend on:
- CUDA exposes a C/C++ programming model
- ONNX Runtime exposes C and C++ APIs
- PyTorch has a C++ frontend and a large C++ core
- oneDNN is a C++-oriented performance library
- OpenVINO is deeply native
- many tokenizer libraries, image codecs, RPC gateways, allocators, and observability tools are native-first
This matters for AI agents because language quality is not only about the language. It is also about the surface area of adjacent systems the language can reach naturally.
If I ask an agent to optimize a C++ inference service, the language model can draw from:
- CUDA examples
- ONNX Runtime docs
- CMake patterns
- compiler flags
- perf and VTune workflows
- allocator tuning patterns
- SIMD examples
- production-style latency engineering
If I ask the same agent to build a similarly deep Rust system, the story is less mature in several places:
- bindings may be thinner
- examples may be fewer
- the happy path may be less documented
- unsafe escape hatches may appear sooner than expected
- interop with existing C++ systems can dominate the work
Rust can absolutely participate in AI systems. But the stack is still not Rust-first. In many important layers it is Rust-adjacent, or Rust-wrapped, rather than Rust-native.
Legacy gravity and why it matters more in the AI era
There is another force that makes C++ stronger than many people expect: legacy gravity.
That phrase sounds negative, but in infrastructure work it usually means:
- proven code
- field-tested integrations
- known operational limits
- mature observability
- existing deployment paths
- public history of bugs and fixes
AI systems rarely arrive in empty environments. They are inserted into companies that already have:
- RPC layers
- storage paths
- image and video pipelines
- custom allocators
- security controls
- device interfaces
- monitoring agents
- compression, serialization, and transport libraries
And in many performance-critical organizations, a large share of that infrastructure is still C or C++.
This matters because AI agents are not only asked to create new code. They are often asked to patch old systems without breaking them. That job becomes easier when the surrounding language is already the dominant systems language in the organization.
Why "messy production C++" is still valuable training data
One of the hidden strengths of the public C++ corpus is that it contains not only elegant examples, but also:
- ugly CMake
- mixed compiler setups
- platform-specific hacks
- vendor SDK integrations
- ABI workarounds
- recovery paths
- long-lived compatibility code
For humans, that can be frustrating. For AI agents, it can be very useful.
Why? Because real production engineering is full of compromise. The language with more public examples of "how people actually survived this integration problem" gives the model more ways to converge on a workable patch.
Why C++ gets better feedback loops in production
This is the most underappreciated part of the whole debate.
A good agent is not just a generator. A good agent improves because it gets fast feedback from the environment:
- compiler errors
- linker errors
- static analysis
- sanitizers
- profilers
- benchmark harnesses
- runtime traces
- integration tests
C++ has a frightening reputation, but it also has an extraordinary amount of tooling for constraining and correcting bad code.
In performance-critical domains, this matters more than people admit
When the goal is "ship a safe greenfield service", Rust often feels more structured.
When the goal is "repair, optimize, and integrate a high-performance system that already talks to drivers, vendor SDKs, trading gateways, or inference runtimes", the C++ toolchain gives both humans and agents a massive amount of leverage:
- Clang and GCC diagnostics
- sanitizers
- perf
- VTune
- heap profilers
- flame graphs
- debuggers
- mature build-system patterns
- enormous historical literature on optimization
In plain English: once the agent writes the code, the ecosystem can tell you very quickly whether that code is stupid.
That feedback loop is priceless.
Why this helps AI-generated code specifically
AI-generated code does not need a perfect language to be useful. It needs:
- a rich corpus for synthesis
- strong tooling for correction
- cheap ways to validate behavior
- predictable integration points
C++ offers all four.
Rust offers some of them extremely well, especially around correctness guarantees, but in many AI-adjacent production domains the surrounding system still "speaks C++" more fluently than it speaks Rust.
What Rust gets right
To make the article honest, we should say clearly what Rust does better.
Rust gets several very important things right:
- Memory safety as a default design goal
- Strong type-level pressure toward explicit ownership
- A disciplined package manager and build story
- An ecosystem culture that rewards correctness and clarity
- Excellent ergonomics for many greenfield services
These are not minor benefits. In some product categories they are decisive.
If your task is a new backend service, an internal tool, a network daemon, or an application where memory safety risk dominates all other concerns, Rust can be an outstanding choice.
But that still does not settle the AI-era question, because the AI-era question is about the combined effect of:
- what the models have seen
- what infrastructure exists
- what feedback loops the team can run
- what legacy systems must be integrated
On that combined question, C++ still leads in many serious domains.
Where Rust is still a weaker bet for AI-assisted delivery
This is where people often get emotional, so let us keep it technical.
1. The public corpus is smaller
That is not an insult. It is just a fact of history. Rust is younger, and its total body of production code is still much smaller than C++'s.
2. Safe, idiomatic Rust is hard for models
Generating Rust syntax is one thing. Generating good Rust is another.
CRUST-Bench, a benchmark for C-to-safe-Rust transpilation, found that this remains difficult for state-of-the-art models. The paper's framing is important: the hard part is not merely translation. The hard part is reaching safe and idiomatic Rust that still satisfies the specification.
That problem generalizes. In real projects, agents do not just need compilable Rust. They need Rust that respects ownership boundaries, borrows sanely, integrates with crates, and remains maintainable after the patch lands.
3. Interop cost is real
Many AI systems already depend on native code, C ABI surfaces, vendor kernels, legacy C++, or specialized runtimes. In such environments, Rust often arrives through FFI seams. That can still be a good design, but it means part of the promised elegance is paid back as interoperability work.
4. "Safe by default" does not remove systems complexity
The moment you move into:
- custom allocators
- kernel interfaces
- SIMD
- zero-copy parsers
- GPU interop
- lock-free structures
- unsafe bindings
the difficulty level rises again. Rust still helps, but its advantage shrinks, and the distance between theory and production is wider than the advertising suggests.
Modern C++ is not the C++ of 1998
One reason this conversation often becomes unproductive is that critics compare modern Rust to ancient, badly written C++.
That is the wrong comparison.
The real comparison in 2026 is:
modern Rust versus disciplined modern C++ with mature tooling.
That means C++ with:
- RAII
- smart pointers
- std::span and std::string_view
- move semantics
- explicit ownership boundaries
- warnings enabled
- sanitizers in CI
- profiling culture
- careful APIs
- performance tests
That codebase is not magically safe. But it is much better than the cartoon version of C++ that many debates rely on.
There is also active safety work in the C++ ecosystem itself: the C++ Core Guidelines, safer library patterns, static analysis, bounds-aware abstractions, and stronger tooling expectations all push the language in a safer direction.
So when we say that C++ is the better AI-era default, we should not mean "old reckless C++ with manual lifetime chaos." We should mean modern, disciplined C++ that is friendly to both optimization and automated feedback.
What this looks like in real teams using agents
Let us make the argument less abstract.
Imagine a team asks an AI agent to improve a native inference service. In a C++ environment, the task might be:
- reduce copies on the image preprocessing path
- integrate a new ONNX Runtime execution provider
- fix a CMake misconfiguration
- tune thread affinity
- add profiling markers
- replace allocation-heavy structures with reusable buffers
- shrink a hot structure so it fits cache better
Now look at the feedback loop:
- the code compiles or fails loudly
- warnings provide useful signal
- sanitizers catch obvious lifetime bugs
- perf or VTune shows whether the hotspot moved
- flame graphs show whether the optimization helped the real path
- the rest of the team already understands the language and its tooling
This is a very healthy environment for iterative AI-assisted delivery.
Now imagine an equivalent low-level Rust task in a codebase that still depends on C++ libraries, vendor SDKs, and FFI seams. The agent may still succeed, but the path is often more fragile:
- the ownership model must coexist with foreign APIs
- fewer public examples may exist for the exact integration
- performance tuning can push the design toward unsafe sooner than expected
- the surrounding runtime and toolchain may still be C++-shaped
That is the key practical point. C++ does not need the first draft to be perfect. It needs the ecosystem to make the second draft better quickly.
Agents are strongest where local verification is rich
The best AI-assisted workflows are not "generate and pray." They are:
- generate a patch
- compile
- inspect warnings
- run tests
- run profilers
- compare metrics
- refine
For low-level systems work, C++ has one of the richest public traditions of this loop. That is a huge reason it remains the better default target for many AI-assisted production teams.
A practical decision framework
Let us end with something actionable.
Choose C++ first if
- you are building AI inference infrastructure
- you need to integrate with CUDA or other vendor-native SDKs
- latency and profiling dominate the engineering work
- the codebase already contains a lot of C or C++
- you want the strongest match between AI-generated code and public production examples
- your team can enforce modern C++ discipline and validation
Choose Rust first if
- the system is greenfield
- memory safety risk dominates the value equation
- the surrounding ecosystem is already Rust-friendly
- interoperability with native AI libraries is limited and well-bounded
- raw latency is not the only priority
Choose a hybrid architecture if
- you want Rust for some service boundaries and C++ for AI hot paths
- your on-device or inference core must remain in C++, but orchestration services can be Rust
- you are incrementally modernizing an existing native stack
That hybrid answer is often the most realistic one.
But if you force me to choose one language as the better default language for AI-assisted systems engineering today, I still choose C++.
Not because it is prettier. Not because it is safer by default. Not because Rust failed.
I choose it because the combination of corpus scale, infrastructure depth, tooling maturity, and production feedback loops still gives C++ the stronger position in the real world.
Examples and counterexamples in real AI delivery
Let us turn the argument into concrete situations.
Example 1: a native inference service with hard latency targets
Suppose a team has a C++ service that:
- decodes images
- batches work
- runs inference through ONNX Runtime
- performs heavy postprocessing
- exposes profiling hooks
An AI agent is asked to reduce p99 latency.
This is a strong C++ scenario because the agent can patch the actual native hot path, compile immediately, run profilers, and compare before/after behavior with mature tooling. The surrounding ecosystem helps the team improve the draft quickly.
Counterexample 1: a small greenfield control-plane service
Now imagine a totally different task: an internal service that coordinates jobs, stores metadata, and talks to existing APIs, with no real native hot path at all.
Forcing C++ there just because "C++ is better in the AI era" would be poor judgment. Rust may be the better choice because safety defaults, ergonomics, and greenfield clarity matter more than deep native performance.
This counterexample matters because the thesis of the article is not "always choose C++." The thesis is "C++ is still the stronger default in many AI-assisted systems domains."
Example 2: patching a messy native production stack
Suppose the agent must fix a product that already includes:
- CUDA
- CMake
- custom allocators
- vendor SDKs
- native plugins
This is where C++ corpus density becomes very practical. Public code and issue trackers contain far more examples of this kind of native integration mess, so the agent has a stronger chance of producing a patch that looks like something a real systems team would actually merge.
Counterexample 2: assuming more corpus means automatic quality
Here is the trap to avoid: more C++ data does not mean every C++ patch an agent writes will be good.
C++ still allows:
- bad ownership
- hidden copies
- fragile lifetimes
- unreadable APIs
So the real claim is not "AI makes bad C++ impossible." The real claim is "C++ gives agents a stronger starting point in many production environments, and the ecosystem gives teams strong ways to correct weak drafts quickly."
Example 3: iterative performance work
If the workflow is:
- generate a patch
- compile
- run tests
- run profilers
- refine the patch
then C++ often shines because this validation loop is so mature and so well documented.
Counterexample 3: no discipline, no advantage
If a team writes chaotic C++ with:
- no warnings
- no sanitizers
- no tests
- no profiling
- no ownership discipline
then the theoretical advantages in this article will not save them.
The case for C++ in the AI era is really a case for disciplined modern C++ with strong verification loops, not for careless native code.
Where rhetoric misleads engineers
Language arguments become especially unhelpful when they are reduced to slogans. Let us look at a few of the slogans that do the most damage.
Slogan 1: "Rust is newer, therefore it must be better for AI-era systems"
Newness is not a proof of suitability. In systems engineering, age can mean:
- broader ecosystem coverage
- more hardware experience
- richer performance folklore
- more public debugging patterns
There are environments where that matters more than conceptual elegance.
Slogan 2: "C++ is too dangerous for AI-generated code"
This contains a partial truth and then overreaches.
Yes, careless C++ is dangerous. But AI-generated code does not live alone. It lives in toolchains. It is compiled, profiled, tested, benchmarked, and reviewed. In an environment with strong validation, the practical question becomes not "Can the language express danger?" but "How fast can the team detect and eliminate bad drafts?"
On that question, C++ often performs better than critics expect.
Slogan 3: "Rust makes performance work easy"
Rust can make many forms of correctness easier, but performance work remains performance work. The moment you enter:
- FFI-heavy environments
- custom allocators
- GPU interfaces
- very tight data layouts
- concurrency under extreme latency pressure
you are back in the old kingdom of careful measurement, structural tradeoffs, and uncomfortable constraints.
Slogan 4: "One language should own everything"
This is perhaps the most seductive and least trustworthy slogan of them all.
Real AI products are stacks. They often include:
- research code
- orchestration layers
- native runtimes
- accelerators
- preprocessing and tokenization
- deployment packaging
- observability
One language may dominate, but the stack rarely becomes truly monolingual. Engineers who accept this reality usually build better systems than engineers who try to defend language purity at all costs.
Hands-On Lab: Build and improve a native scoring pipeline
If an article about AI-era language choice contains no code, it risks becoming a sermon.
So let us build a small native C++ utility of the kind AI agents are constantly asked to improve in real companies: a text scoring pipeline that loads data, computes simple features, sorts the results, and prints the top rows.
It is modest on purpose. Most production engineering is modest.
main.cpp
#include <algorithm>
#include <chrono>
#include <cctype>
#include <fstream>
#include <iostream>
#include <string>
#include <string_view>
#include <vector>
struct Sample {
std::string text;
double score = 0.0;
};
static int count_digits(std::string_view s) {
int n = 0;
for (unsigned char c : s) {
n += std::isdigit(c) ? 1 : 0;
}
return n;
}
static int count_upper(std::string_view s) {
int n = 0;
for (unsigned char c : s) {
n += std::isupper(c) ? 1 : 0;
}
return n;
}
static int count_punct(std::string_view s) {
int n = 0;
for (unsigned char c : s) {
n += std::ispunct(c) ? 1 : 0;
}
return n;
}
static double score_line(std::string_view s) {
const auto len = static_cast<double>(s.size());
const auto digits = static_cast<double>(count_digits(s));
const auto upper = static_cast<double>(count_upper(s));
const auto punct = static_cast<double>(count_punct(s));
return len * 0.03 + digits * 0.7 + upper * 0.15 - punct * 0.05;
}
int main(int argc, char** argv) {
if (argc < 2) {
std::cerr << "usage: scorer <input-file>\n";
return 1;
}
std::ifstream in(argv[1]);
if (!in) {
std::cerr << "cannot open input file\n";
return 1;
}
std::vector<Sample> rows;
rows.reserve(200000);
std::string line;
while (std::getline(in, line)) {
rows.push_back({line, 0.0});
}
const auto t0 = std::chrono::steady_clock::now();
for (auto& row : rows) {
row.score = score_line(row.text);
}
std::sort(rows.begin(), rows.end(), [](const Sample& a, const Sample& b) {
return a.score > b.score;
});
const auto t1 = std::chrono::steady_clock::now();
const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
std::cout << "processed " << rows.size() << " rows in " << ms << " ms\n";
const size_t limit = std::min<size_t>(5, rows.size());
for (size_t i = 0; i < limit; ++i) {
std::cout << rows[i].score << " | " << rows[i].text << "\n";
}
}
Build
On Linux or macOS:
g++ -O2 -std=c++20 -o scorer main.cpp
./scorer sample.txt
On Windows with MSVC:
cl /O2 /std:c++20 /EHsc main.cpp
.\main.exe sample.txt
Why this tiny program is useful
Because it is exactly the kind of code where AI-assisted engineering becomes tangible:
- it is native
- it touches strings and memory
- it has a measurable runtime
- it can be profiled
- it can be improved incrementally
That is the real habitat of many C++ agents today: not grand demonstrations, but ordinary native programs that need to become better without being reinvented.
Test Tasks for Enthusiasts
If you want to turn the article into a practical exercise, try these:
- Ask your favorite coding agent to optimize the program without changing output. Inspect whether it reduces duplicate passes or unnecessary temporaries.
- Add separate timing for file loading, scoring, and sorting. Verify where the time really goes.
- Replace the input with one million lines and compare the quality of optimizations suggested by different agents.
- Port the utility to Rust and compare the experience honestly: what felt clearer, what felt heavier, and what surrounding tooling felt more mature for this exact task.
- Run the C++ version under a profiler and write down whether your first guess about the hotspot was actually right.
This is a small exercise, but that is precisely why it is useful. Most engineering debates become more truthful when they are forced to survive contact with a small real program.
Summary
Rust deserves respect. It solved real problems and pushed the industry forward.
But the AI era rewards more than language design elegance. It rewards the language that sits at the center of the largest living code corpus, the densest systems ecosystem, the deepest performance literature, and the strongest validation loop from source code to shipped binary.
Today, that language is still C++.
And that is why AI coding agents, in many serious production environments, still tend to be more useful, more effective, and easier to operationalize when the target system is written in C++ rather than Rust.
References
- GitHub Octoverse 2024: https://github.blog/news-insights/octoverse/octoverse-2024/
- GitHub Octoverse 2025: https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/
- Stack Overflow Developer Survey 2023: https://survey.stackoverflow.co/2023
- Stack Overflow Developer Survey 2025 Technology section: https://survey.stackoverflow.co/2025/technology/
- The Stack dataset card: https://huggingface.co/datasets/bigcode/the-stack
- The Stack paper: https://arxiv.org/abs/2211.15533
- ICLR 2025 paper on the impact of code data in pre-training: https://openreview.net/pdf?id=zSfeN1uAcx
- CRUST-Bench: A Comprehensive Benchmark for C-to-safe-Rust Transpilation: https://arxiv.org/abs/2504.15254
- CUDA C++ Programming Guide: https://docs.nvidia.com/cuda/cuda-c-programming-guide/
- ONNX Runtime C/C++ API: https://onnxruntime.ai/docs/api/c/index.html
- PyTorch C++ frontend documentation: https://docs.pytorch.org/cppdocs/frontend.html
- C++ Core Guidelines: https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines