Demo - Multi-Phase GCC 12 Compiler Bootstrap#

We use our tools to build our tools. But we need to get started on a new platform by building our basic compiler suite.

Using caching in GitHub Actions, we can connect multiple toolchain workflows to each other. Dependencies built in one phase will be pre-installed for use in subsequent phases.
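As a sketch of how one phase can hand its results to the next (the key and paths here are illustrative, not necessarily what the demo workflows use), a job can publish its Conan cache under a shared key that the following phase restores:

```yaml
# Hypothetical fragment: persist the Conan cache between toolchain phases.
- name: Restore Conan cache from the previous phase
  uses: actions/cache@v4
  with:
    path: ~/.conan2
    key: conan-toolchain-${{ runner.os }}-${{ runner.arch }}-phase2
    restore-keys: |
      conan-toolchain-${{ runner.os }}-${{ runner.arch }}-
```

A later phase that restores with the same key prefix starts with the earlier phase's packages already in its local cache.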

gcc-toolchain Workflow Job Structure

conan-demoToolchain.yml
├── bootstrap_container_image - conan-build-container.yml
│   ├── conan-base
│   │   ├── ubuntu-aarch64 - docker-singlePlatform.yml
│   │   ├── ubuntu-x86_64 - docker-singlePlatform.yml
│   │   ├── almalinux-aarch64 - docker-singlePlatform.yml
│   │   └── almalinux-x86_64 - docker-singlePlatform.yml
│   └── conan-bootstrap
│       ├── ubuntu-aarch64 - docker-singlePlatform.yml
│       ├── ubuntu-x86_64 - docker-singlePlatform.yml
│       ├── almalinux-aarch64 - docker-singlePlatform.yml
│       └── almalinux-x86_64 - docker-singlePlatform.yml
├── phase 1 - conan-multiPlatformToolchain.yml
│   ├── ubuntu-aarch64 - conan-toolchain.yml
│   ├── ubuntu-x86_64 - conan-toolchain.yml
│   ├── almalinux-aarch64 - conan-toolchain.yml
│   └── almalinux-x86_64 - conan-toolchain.yml
├── phase 2 - conan-multiPlatformToolchain.yml
│   ├── ubuntu-aarch64 - conan-toolchain.yml
│   ├── ubuntu-x86_64 - conan-toolchain.yml
│   ├── almalinux-aarch64 - conan-toolchain.yml
│   └── almalinux-x86_64 - conan-toolchain.yml
├── phase 3 - conan-multiPlatformToolchain.yml
│   ├── ubuntu-aarch64 - conan-toolchain.yml
│   ├── ubuntu-x86_64 - conan-toolchain.yml
│   ├── almalinux-aarch64 - conan-toolchain.yml
│   └── almalinux-x86_64 - conan-toolchain.yml
└── build_container_image - conan-build-container.yml
    ├── conan-base
    │   ├── ubuntu-aarch64 - docker-singlePlatform.yml
    │   ├── ubuntu-x86_64 - docker-singlePlatform.yml
    │   ├── almalinux-aarch64 - docker-singlePlatform.yml
    │   └── almalinux-x86_64 - docker-singlePlatform.yml
    ├── conan-bootstrap
    │   ├── ubuntu-aarch64 - docker-singlePlatform.yml
    │   ├── ubuntu-x86_64 - docker-singlePlatform.yml
    │   ├── almalinux-aarch64 - docker-singlePlatform.yml
    │   └── almalinux-x86_64 - docker-singlePlatform.yml
    ├── conan-build
    │   ├── ubuntu-aarch64 - docker-singlePlatform.yml
    │   ├── ubuntu-x86_64 - docker-singlePlatform.yml
    │   ├── almalinux-aarch64 - docker-singlePlatform.yml
    │   └── almalinux-x86_64 - docker-singlePlatform.yml
    └── conan-docker-build
        ├── ubuntu-aarch64 - docker-singlePlatform.yml
        ├── ubuntu-x86_64 - docker-singlePlatform.yml
        ├── almalinux-aarch64 - docker-singlePlatform.yml
        └── almalinux-x86_64 - docker-singlePlatform.yml

Prerequisites#

Nexus package upload configured and implemented#

Currently supported:

  • RPM/Yum
  • Deb/Apt
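For reference, Nexus 3 uses different endpoints for these two formats: Yum-hosted repositories accept a direct HTTP PUT of the `.rpm`, while Apt-hosted repositories take `.deb` files through the components REST API. A minimal sketch of the request shapes (server, repository, and file names are placeholders):

```python
# Sketch: request shapes for uploading packages to Nexus 3 hosted repos.
# Server/repo/file names are placeholders; credentials would come from the
# NEXUS_CI_USER / NEXUS_CI_PASSWORD settings described below.
import base64
import urllib.request


def yum_upload_request(server: str, repo: str, rpm_path: str) -> urllib.request.Request:
    """Yum-hosted repos accept a plain PUT of the .rpm at its repo path."""
    url = f"{server}/repository/{repo}/{rpm_path}"
    return urllib.request.Request(url, method="PUT")


def apt_upload_url(server: str, repo: str) -> str:
    """Apt-hosted repos take .deb files via the components REST API
    (multipart POST with an apt.asset field)."""
    return f"{server}/service/rest/v1/components?repository={repo}"


def basic_auth_header(user: str, password: str) -> str:
    """Both endpoints authenticate with HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


req = yum_upload_request("https://nexus.example.com", "yum-hosted",
                         "opt-toolchain-binutils-2.42-1.x86_64.rpm")
```

The actual demo drives these uploads from its workflows; this only illustrates the two API shapes involved.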

GitHub Actions secrets and variables for workflow#

Phase 3 of the demo uploads the final binary packages to Nexus when configured as follows:

Secrets:

  • NEXUS_CI_PASSWORD: "<password for $NEXUS_CI_USER>"

Variables:

  • NEXUS_SERVER: "<nexus-server-url>"
  • NEXUS_YUM_REPO: "<yum-repository-name>"
  • NEXUS_APT_REPO: "<apt-repository-name>"
  • NEXUS_CI_USER: "<nexus-write-update-user>"

bootstrap_container_image#

Before we can start on the toolchain, we need minimal base and bootstrap container images that can run conan and build the toolchain using the OS vendor's gcc toolchain packages.

conan-build-container

These images provide basic Conan functionality and a GCC toolchain from the OS vendor's packages. This demo uses these vendor-based images to assemble our GCC toolchain build image.

  • conan-base-${os_name}:${arch}-latest
  • conan-bootstrap-${os_name}:${arch}-latest

OS/Platform support includes:

  • AlmaLinux 9.6 (x86_64, aarch64)
  • Ubuntu 24.04LTS (x86_64, aarch64)
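The naming templates above expand into a small matrix of tags, two image flavors times two OS names times two architectures. Purely as an illustration of the convention:

```python
# Illustrative: expand the image-name templates into the full tag matrix.
images = ["conan-base", "conan-bootstrap"]
os_names = ["ubuntu", "almalinux"]
arches = ["x86_64", "aarch64"]

# ${image}-${os_name}:${arch}-latest, for every combination
tags = [f"{image}-{os_name}:{arch}-latest"
        for image in images
        for os_name in os_names
        for arch in arches]
```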

Phase 1 - GNU binutils#

GCC depends on binutils, so that's where we start. We need a working compiler to build our tools though, so we'll use a special bootstrap container image with the OS Vendor's compiler chain installed.

phase 1 - conanfile.py
    from conan.tools.system.package_manager import Apt, Yum

    # For this bootstrapping phase we'll depend on OS vendor-provided tools
    def system_requirements(self):
        Apt(self).install(["make", "cmake", "binutils", "gcc"])
        Yum(self).install(["make", "cmake", "binutils", "gcc"])

    def requirements(self):
        self.requires("binutils/2.42")

Link to phase 1 conanfile.py

Phase 2 - bootstrapping CMake, GNU Make, and GCC#

In Phase 2, we'll build our GCC configured to use our binutils package from Phase 1. We're still depending on the system GCC, however, so we still want the OS vendor's binutils available for its use as well.

phase 2 - conanfile.py
    from conan.tools.system.package_manager import Apt, Yum

    # For this bootstrapping phase we'll depend on OS vendor-provided tools
    def system_requirements(self):
        Apt(self).install(["make", "cmake", "binutils", "gcc",
                           "opt+toolchain-binutils=2.42-1",
                          ])
        Yum(self).install(["make", "cmake", "binutils", "gcc",
                           "opt-toolchain-binutils-2.42-1",
                          ])

    def requirements(self):
        self.requires("make/4.4.1")
        self.requires("cmake/4.0.1")
        self.requires("gcc/12.2.0")

Link to phase 2 conanfile.py
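The pinned system-package names in these snippets follow a consistent pattern derived from the Conan reference: an `opt+toolchain-` prefix with `=version-release` pinning for Apt, and an `opt-toolchain-` prefix with dnf's name-version-release form for Yum. A hypothetical helper (not part of the demo) makes the mapping explicit:

```python
# Hypothetical helpers: map a Conan reference like "binutils/2.42" to the
# pinned system-package specs used in the system_requirements() snippets.
def apt_spec(conan_ref: str, release: int = 1) -> str:
    """Apt pins with '=': opt+toolchain-<name>=<version>-<release>"""
    name, version = conan_ref.split("/")
    return f"opt+toolchain-{name}={version}-{release}"


def yum_spec(conan_ref: str, release: int = 1) -> str:
    """Yum/dnf pins by NEVR: opt-toolchain-<name>-<version>-<release>"""
    name, version = conan_ref.split("/")
    return f"opt-toolchain-{name}-{version}-{release}"
```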

Phase 3 - Clean rebuilds using our toolchain#

Finally, we will go back to our OS-minimal container image, install the tools we built in the previous phases, and rebuild all of our tools.

By using the OS-minimal image this time, we can be certain that there will be no surprise dependencies on the original OS Vendor toolchain.

We only upload these final builds to Artifact Management, along with a brand new Conan Build container image for building all other tools we support. The previous packages built with our bootstrap image are discarded.

phase 3 - conanfile.py
    from conan.tools.system.package_manager import Apt, Yum

    # Finally, we use our tools from phases 1 and 2 to build our tools again
    def system_requirements(self):
        Apt(self).install(["opt+toolchain-make=4.4.1-1",
                           "opt+toolchain-cmake=4.0.1-1",
                           "opt+toolchain-gcc=12.2.0-1",
                           "opt+toolchain-binutils=2.42-1",
                          ])
        Yum(self).install(["opt-toolchain-make-4.4.1-1",
                           "opt-toolchain-cmake-4.0.1-1",
                           "opt-toolchain-gcc-12.2.0-1",
                           "opt-toolchain-binutils-2.42-1",
                          ])

    def requirements(self):
        self.requires("make/4.4.1")
        self.requires("cmake/4.0.1")
        self.requires("binutils/2.42")
        self.requires("gcc/12.2.0")

Link to phase 3 conanfile.py

For each platform, we end up with an overall flow that looks like this:

(Diagram: Conan Toolchain Demo)

build_container_image#

Now that the workflows have worked their magic with the vendor images to build and publish our GCC toolchain, we can build our final-form toolchain builder images.

conan-build-container

These images add our own Conan GCC Toolchain and other useful toolchain construction tools onto the base image.

  • conan-build-${os_name}:${arch}-latest
  • conan-docker-build-${os_name}:${arch}-latest

OS/Platform support includes:

  • AlmaLinux 9.6 (x86_64, aarch64)
  • Ubuntu 24.04LTS (x86_64, aarch64)

See Also#

conan-docker-tools

The Conan project builds similar images for this purpose, but with a few significant differences:

  • conan-docker-tools does a custom compiler build directly in its Dockerfile and does not use Conan at all.
  • We use the same Conan recipes, and Conan itself, to build the toolchain container that we provide to developers.
  • Packaging our toolchain with the system package manager decouples container builds from toolchain builds in time: changing the container configuration or installing updates does not necessarily require rebuilding the toolchain.
  • We build both Ubuntu and AlmaLinux (Red Hat-derived) images, for both ARM and x86 CPU architectures.