This post provides a short experience report about my approach to migrating my toy project jt-computing to the latest features of C++26.
No one depends on the project,
I have full control over all aspects, and that's why I can just fiddle around and see what works.
The GNU (gcc) and LLVM (clang) implementations of the compiler,
standard library, and the rest of the ecosystem support the described features as of 2026-05-02 in their latest versions.
The project has only one external dependency,
catch2 for tests and benchmarks.
Converting jt-computing to modules hopefully gives me a bit of experience and insights on how to actually do the build definition and basic mechanics that I can later apply to bigger real world projects.
My approach and experience may help you do the same for your toy project or even help with a production code base.
Tooling Setup
I use two systems for coding: my desktop PC with gentoo Linux for customization and my laptop with Fedora Linux. Both allow me to install very recent versions of the necessary programming tools:
- cmake-4.3
- gcc-16
- mold-2.40 as linker for gcc
- nvim-0.11 and clangd-22
- clang-22, using the full LLVM stack, including libc++ and lld
I prefer gentoo for the development tasks,
because it is easier to get the bleeding edge versions of all tools,
as one can usually install “from git”.
For simpler day-to-day usage of the full LLVM stack,
I installed clang on gentoo with additional compile time configuration to use LLVM tools by default.
# File: /etc/portage/package.accept_keywords/development
sys-devel/gcc ~amd64
llvm-core/llvm-common ~amd64
llvm-core/llvm ~amd64
llvm-core/llvm-toolchain-symlinks ~amd64
llvm-core/clang-common ~amd64
llvm-core/clang ~amd64
llvm-core/clang-linker-config ~amd64
llvm-core/clang-toolchain-symlinks ~amd64
llvm-core/llvmgold ~amd64
llvm-core/lld ~amd64
llvm-core/lld-toolchain-symlinks ~amd64
llvm-core/lldb ~amd64
llvm-runtimes/clang-runtime ~amd64
llvm-runtimes/clang-stdlib-config ~amd64
llvm-runtimes/clang-rtlib-config ~amd64
llvm-runtimes/clang-unwindlib-config ~amd64
llvm-runtimes/compiler-rt ~amd64
llvm-runtimes/compiler-rt-sanitizers ~amd64
llvm-runtimes/libcxx ~amd64
llvm-runtimes/libcxxabi ~amd64
llvm-runtimes/libunwind ~amd64
llvm-runtimes/openmp ~amd64
dev-python/lit ~amd64
# File: /etc/portage/package.use/development
>=llvm-core/clang-common-22 default-libcxx default-lld default-compiler-rt
>=llvm-core/clang-linker-config-22 default-lld
>=llvm-runtimes/clang-runtime-22 default-lld default-libcxx default-compiler-rt
>=llvm-runtimes/clang-rtlib-config-22 default-compiler-rt
>=llvm-runtimes/clang-unwindlib-config-22 default-compiler-rt
>=llvm-runtimes/clang-stdlib-config-22 default-libcxx
>=llvm-runtimes/libunwind-22 static-libs
sys-libs/libunwind static-libs
virtual/zlib static-libs
sys-libs/zlib static-libs
Project Layout
The repository and project layout was generated from a modern CMake template whose original author I forgot (sorry!). It is structured as follows:
- `include/` and `lib/` define and implement the library interface
- `include/` can be easily installed into a system by just copying the contents to the system headers directory
- `test/` is a discrete CMake project that implements unit and integration tests of the `lib/` components using `catch2`
- `bin/` is a separate directory for CLI tools that use `lib/`, in theory compilable independently with system-installed `lib/` and `include/` components
- `cmake/` contains support code
- `build*/` are build directories for the various compilers, ignored in `git`
$ tree -L 1 bin lib include test
bin
├── calculate_fibonacci.cpp
├── CMakeLists.txt
├── collatz_chain.cpp
├── find_prime_numbers.cpp
├── ...
lib
├── container
├── core
├── crypto
└── math
include
└── jt-computing
test
├── CMakeLists.txt
└── lib
The project structure works well and helped me in the transition to modules. I would like to redo its cmake definition, as I find it a bit too verbose and messy. Properly installing the project as a system library does not work either, and I did not evaluate changes in that respect.
Introducing import std;
Starting to use import std; is mostly a cmake change and requires an up-to-date toolchain.
It is mandatory to use ninja as the build system (e.g. with cmake -B build_dir -S . -G Ninja).
The standard must be changed to at least C++23,
I use C++26 to enable even more features.
cmake’s support for importing the standard library module is experimental and requires setting a feature gate.
The proper value must be looked up in the documentation of the corresponding cmake version.
Note that the feature gate must be enabled before your project() call.
cmake_minimum_required(VERSION 4.2)
set (CMAKE_EXPERIMENTAL_CXX_IMPORT_STD d0edc3af-4c50-42ea-a356-e2862fe7a444)
project("JTComputing" VERSION 0.1.0 LANGUAGES CXX)
set (CMAKE_CXX_STANDARD 26)
set (CMAKE_CXX_MODULE_STD ON)
set (CMAKE_CXX_SCAN_FOR_MODULES ON)
Setting these properties can be done on a per-target basis, too.
function(jt_compile_setup target)
  set_target_properties(${target}
    PROPERTIES
      CXX_STANDARD 26
      CXX_MODULE_STD ON
      CXX_SCAN_FOR_MODULES ON
  )
endfunction()
From inspecting the generated build commands,
it seems that gcc requires GNU extensions for the standard library module support,
but I am not aware of the details.
The following code transformation introduced import std;:
- Perform a project-wide string search for `#include <`, using telescope.nvim.
- Highlight each found standard header using Tab in the picker and finally open the files in the quick-fix list via Alt-q.
- Cycle through all locations of the quick-fix list via ]q, remove each standard include, and add `import std;` after all `#include` directives.
- Compile and test.
  - I had to comment out or remove a few standard macros like `assert` (see contracts) and `CHAR_BIT`.
  - I had no issues with C standard library functions used in the global namespace – you can use `import std.compat;` in these situations.
This process is a good candidate for a clang-tidy modernize check to automate the cumbersome work.
Code Navigation Sidequest
Using the gcc-generated compile_commands.json with modules led to warnings about unknown arguments, stemming from module flags.
Instead, I maintain a second build directory using the full LLVM toolchain and link the compile_commands.json from there into my source directory.
clangd’s modules support is still experimental, so it must be started with the --experimental-modules-support flag – adjust your editor’s LSP setup accordingly.
After doing so, I received error messages about mismatching compiler versions when consuming the internal std.pcm files from clangd, resulting in this bug report for gentoo.
The issue was present on Fedora, too.
Digging around in the logs, the LLVM code base, and the cmake definition – and finally clearing my mind by touching grass – I figured the problem out.
The root cause was managing clangd via mason in nvim, which ships a clangd build with different VCS information than the system’s clang compiler used to build the project.
Resolving this issue is of course simple: don’t do that.
It may be a recurring situation though,
as it’s quite common to install clangd via your editor’s/IDE’s packaging instead of as part of your system compiler distribution.
NOTE: the produced module artifacts of the build are not standardized and need recompilation for each compiler,
even between different compiler versions,
hence the warning.
Syntax Highlighting Sidequest
Another nvim-related issue was syntax highlighting.
The latest released tree-sitter-cpp-0.23.4 is missing highlight groups for the module keywords, which I checked using :TSHighlightCapturesUnderCursor.
The upstream project already contains the necessary code on master, but lacks a release.
Apparently, the maintainers with the power-to-release are currently inactive (Bug Comment).
I created the fork JonasToth/tree-sitter-cpp and added a v9999 tag to the latest commit on master.
Installation on my system uses my personal gentoo overlay:
- remove `cpp` from the nvim-installed tree-sitter parsers (and/or `:TSUninstall cpp`)
- add `dev-libs/tree-sitter-cpp **` to `/etc/portage/package.accept_keywords/development`
- install via `emerge --sync jonas-overlay ; emerge --ask dev-libs/tree-sitter-cpp::jonas-overlay`
Finally, the code-writing experience is on par with good old header includes.
Migration to C++ Modules
My approach was inspired by the blog posts of Adrian Bühlmann and additional resources for module insights:
- Converting an App to Modules
- Unneeded Recompilations when using Modules
- C++20 Modules: Best Practices from a User’s Perspective
- Rubén Pérez’s Blog Posts about Modules
High Level Approach
- Each `lib/` subdirectory becomes a module, e.g. `jt.Math` or `jt.Crypto`.
- Each test is part of the corresponding module as an internal partition (recommendation from chuanqixu9).
- Individual test cases have a 1-to-1 mapping of file to executable – this is maintained from the header-based version of the tests.
- The `include/` directory becomes obsolete, as the project will only consist of `.cpp` and `.cppm` files.
Technical Implementation
First, the build definition needs to manage source files for executables and libraries via target_sources().
cmake introduced the concept of a FILE_SET to support modules.
add_library(JTComputing)
target_sources(JTComputing
  PUBLIC
    FILE_SET cxx_modules TYPE CXX_MODULES FILES
      lib/container/Container.cppm
      lib/container/BitVector.cpp
      # ...
)
Each test executable gets a similar target_sources FILE_SET to build the code as a module.
add_executable(BitVector_Tests)
target_sources(BitVector_Tests
  PRIVATE
    FILE_SET cxx_test_modules TYPE CXX_MODULES FILES
      test/lib/container/BitVector.cpp
)
target_link_libraries(BitVector_Tests
  PUBLIC
    Catch2::Catch2WithMain
    Catch2::Catch2
    JTComputing::JTComputing
)
add_test(NAME BitVector COMMAND BitVector_Tests)
Migration Pseudo-Algorithm
- for each subcomponent in `lib/`
  - add a `lib/<Component>/<Component>.cppm` file that defines the module interface
  - move the contents of the `<Aspect>.hpp` file to the top of the corresponding `<Aspect>.cpp` file
  - add the `<Component>.cppm` and `<Aspect>.cpp` files to the `FILE_SET` in the cmake project
  - the `<Aspect>.cpp` file exports a module partition matching its name using `export module jt.<Component>:<Aspect>`
  - the partition is added to the `<Component>.cppm` as export using `export import :<Aspect>;`
  - delete all header includes of `<Aspect>.hpp` throughout the project and introduce the matching `import jt.<Component>;` if not already present in the user file (this can be done the same way as described for `import std;` above)
  - export the whole namespace definition using `export namespace jt::<Component>` in the new module file `<Aspect>.cpp`
  - delete `<Aspect>.hpp` and remove it from the cmake definitions if present
// Example of lib/container/Container.cppm
export module jt.Container;
export import jt.Core;
export import :BitVector;
// Example for lib/container/BitVector.cpp
export module jt.Container:BitVector;
import std;
import jt.Core;
export namespace jt::container {
class BitVector {
public:
  BitVector() = default;

  /// Construct a @c BitVector that has enough bits to represent @c value and
  /// assign @c values bit pattern to the individual bits.
  explicit BitVector(unsigned_integral auto value);

  /// Construct a @c BitVector with initial capacity of at least @c length bits.
  BitVector(usize length, bool initialValue);

  /// ... Implementation is at the bottom of the file.
};
}
- for each testcase in `test/`
  - add the global module fragment and include `catch2` headers
  - define a private module partition `module jt.<Component>:Test<Aspect>;`
  - ensure the file is part of the `FILE_SET` for the test executable
// Example for test/lib/container/BitVector.cpp
module;
#include <catch2/catch_test_macros.hpp>
module jt.Container:TestBitVector;
import std;
// This import is necessary!
import jt.Container;
using namespace std;
using namespace jt;
using namespace jt::container;
TEST_CASE("BitVector Construction", "") {
  SECTION("Default Construction") {
    BitVector b;
    REQUIRE(b.capacity() == 0);
    REQUIRE(b.size() == 0);
  }
}
// ...
Hindsight and Learnings
- Because a `<Component>` module is split into multiple partitions, the individual partitions may need additional `import :<OtherAspect>` imports internally to have all dependent components available.
- Try to perform the conversion `<Component>`-wise, mixing includes in the global module fragment with imports, until reaching a compiling state after each `<Component>`.
- Containing all code in namespaces made the `export` trivial.
- If the project is small enough, the conversion can be done in one session.
- The module structure of the project finally mirrors the previous header/implementation structure – I like this property because refactorings and migrations can be sequenced with well-defined in-between states.
- The fully modularized structure can be adjusted further to follow current best practices:
  - breaking up build-time dependencies by splitting interface and implementation into separate `.cpp` files (Cpp Files to Break Build-Dependencies)
  - improving the partition structure and potentially reducing the number of partitions
  - reducing the exported interface by selectively exporting classes and functions instead of the whole namespace in each file
- You are free to reshuffle all file-related aspects of your code within a single module without breaking the module user.
Convert to using namespace std; everywhere
Once the code base is fully modularized, it is possible to use using namespace std; everywhere and have it as a default.
Because there is no textual header inclusion, the using-directive doesn’t bleed into other headers and implementations.
Just add using namespace std; after each import std; or to the other using namespace ...; sections already present.
Then perform a textual replacement of 'std::' => '' and fix the remaining compiler errors.
Using C++ Contracts
The starting point for contract assertions in the code base was the good old assert() macro, already used to state pre/post conditions and invariants.
The modularized code base could not use assert() anymore, because import std; does not export macros.
Of course it would be possible to add #include <cassert> in the global module fragment, but given the small code size, temporarily commenting out assert() worked better.
gcc-16 is currently the only shipping compiler with contracts support, so I decided to use macros after all to still compile with clang.
The following steps enabled contracts:
- reintroduce `include/` and add `include/jt-computing/core/Contracts.hpp`, included in the global module fragment of contracts users
- pass `-fcontracts` and `-fcontract-evaluation-semantic=enforce` to `gcc` through a quick-and-dirty extension of the `target_compile_options()` and `target_link_options()` in `cmake`
I expect future releases of cmake to expose the evaluation semantic through typical target properties and invoke the compiler correctly if C++26 is the target standard.
The assertion macros just pass through to the proper contracts keywords.
// File: include/jt-computing/core/Contracts.hpp
#pragma once
#ifdef __clang__
# define PRE(...)
# define POST(...)
# define CONTRACT_ASSERT(...)
#else
# define PRE(...) pre(__VA_ARGS__)
# define POST(...) post(__VA_ARGS__)
# define CONTRACT_ASSERT(...) contract_assert(__VA_ARGS__)
#endif
// Example include for normal consumer that does not define a module.
#include "jt-computing/core/Contracts.hpp"
import std;
import jt.Core;
import jt.Math;
// ...
class Dist {
public:
explicit constexpr Dist(i32 d) PRE(d >= 0) : value{d} {}
// ...
// Example include for a module using the contract macros.
module;
#include "jt-computing/core/Contracts.hpp"
export module jt.Container:BitVector;
// ...
Once clang supports contracts, the macros will disappear again using simple search-and-replace.
Quick and Dirty Build-Time Comparison
A C++ post requires time measurements, so let’s measure compile times of clean builds.
Please note, this is still a toy project.
I don’t want you to draw definitive conclusions from these results.
Only gcc-16 together with mold-2.40 and libstdc++ is measured!
Header-based Project
The old header based project revision is tracked on the branch jt-computing-old at 3fb47bb32d36aad15943bf82bb64fac1f0901eac.
I had to backport minor changes to the build definition to consume Catch2 via FetchContent and to skip module scanning.
Building Catch2 is done independently, and 3 clean builds were measured per core count.
$ cmake \
--fresh \
-B build_timed \
-S . \
-G Ninja \
-DCMAKE_LINKER_TYPE=MOLD \
-DCMAKE_BUILD_TYPE=RelWithDebInfo
$ function measure() {
cmake --build build_timed --target clean
cmake --build build_timed --target Catch2 Catch2WithMain
time cmake --build build_timed -- -j${1}
}
$ measure 8 # 3 times
$ measure 32 # 3 times
> ...
> [82/82] Linking CXX executable bin/percolation_power.x
| Cores | Average Time in Seconds | Raw Values |
|---|---|---|
| 32 | ~4.4s | 4.33 4.48 4.38 |
| 8 | ~7.0s | 6.97 6.97 6.99 |
Module-based Project
The modules and contracts based revision is on master at bbc4900b28f07e8d516566bb49008968ce7ad86f.
The build performs module scanning and produces the standard library module.
$ measure 8 # 3 times
$ measure 32 # 3 times
> ...
> [117/117] Linking CXX executable bin/percolation_power.x
| Cores | Average Time in Seconds | Raw Values |
|---|---|---|
| 32 | ~7.6s | 7.57 7.63 7.57 |
| 8 | ~9.5s | 9.45 9.46 9.44 |
Sadly, the modules version performs slower clean builds. Happily, I don’t have a lot of time to increase the size of the project, so it’s fast to build anyway /s.
In all seriousness, I am surprised to see the significant increase in compile time.
The build takes 35 more steps due to scanning for module definitions before the actual compilation happens.
Maybe the regression is related to my decision of having the tests as part of the modules.
I want to revisit this point and see, if I can improve the build speed by adjusting my module structure.
Measuring incremental builds may restore the modules’ honor, but I want to finish the blog post 😅. The incremental builds feel quite fast, and I suspect not building the standard library module makes a big difference. Incremental builds of the project suffer from unnecessary build-time dependencies from the module structure. With a bit more experience and optimization, I want to remeasure, including incremental builds.
Conclusion
The migration was easier than I thought but harder than I hoped. As you might imagine, the time-consuming part was adjusting all the tooling, versions, and libraries to have a good experience. Finally changing the code was quite fast.
I read about modules over the years to keep up to date but was honestly confused by the “new words” I had no connection to, like global module fragment.
The conversion gave me a clearer picture on what the different aspects of modules mean and how they interact with each other.
A migration of a bigger project seems achievable as long as the code is already “modularized” in spirit.
If I had to migrate a production code base, I would start with the foundational components and perform multiple end-to-end transformations the way I described above.
Quick-and-dirty python scripts would likely suffice to perform the bulk of the changes with manual interventions and fixups to keep the code compiling.
clang-tidy based introduction of import std; seems possible, maybe I can renew my rusty clang-tidy knowledge and hack on that a bit.
In my opinion, clangd support for modules is as important as compiler support to not regress into “toolless development”.
The syntax highlighting issues in nvim are an annoyance, but were eventually resolved.
I am glad to see that the whole development ecosystem adopts modules and hope for acceleration with gcc-16 providing better support.
Removing std:: everywhere improved readability and is a welcome change to C++.
I am looking forward to not having to remember header names and to no more accidentally missing includes that lead to compiler errors after toolchain updates.
Thank you for your effort to everyone involved in the continued evolution of C++ and its tools!