Building From Source#

This guide shows you how to build stdgpu from source. Since we use CMake for cross-platform building, the build instructions should work across operating systems and distributions. Furthermore, the prerequisites cover specific instructions for installing the build dependencies on Linux (Ubuntu) and Windows.

Prerequisites#

Before building the library, please make sure that all required development tools and dependencies for the respective backend are installed on your system. The following list shows the minimum required version of each dependency. Newer versions of these tools are supported as well.

  • CUDA backend

    • C++17 compiler

      • GCC 9: sudo apt install g++

      • Clang 10: sudo apt install clang

    • CUDA compiler

      • NVCC (already included in the CUDA Toolkit)

      • Clang 10: sudo apt install clang

    • CUDA Toolkit 11.5: https://developer.nvidia.com/cuda-downloads

    • CMake 3.18: sudo apt install cmake

  • OpenMP backend

    • C++17 compiler with OpenMP 2.0

      • GCC 9: sudo apt install g++

      • Clang 10: sudo apt install clang libomp-dev

    • CMake 3.18: sudo apt install cmake

    • thrust 1.13.1: NVIDIA/thrust

  • HIP backend

    • C++17 compiler

      • GCC 9: sudo apt install g++

      • Clang 10: sudo apt install clang

    • HIP compiler

      • Clang (already included in ROCm)

    • ROCm 5.1: RadeonOpenCompute/ROCm

    • CMake 3.21.3: sudo apt install cmake

While these instructions will likely also work on Debian, other Linux distributions (e.g. Arch Linux and derivatives) may use a different naming scheme for the required packages.
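To check which of these tools are already installed, you can print their versions on the command line; a minimal sketch covering the CUDA backend tool set (adjust the list of tools for your backend):

```shell
# Print the first version line of each required tool, or a note if it is missing.
for tool in g++ clang++ cmake nvcc; do
    if command -v "$tool" >/dev/null 2>&1; then
        "$tool" --version | head -n 1
    else
        echo "$tool: not found"
    fi
done
```

Each reported version should meet or exceed the minimum version listed above.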

Downloading the Source Code#

In order to get the source code needed for building, the most common approach is to clone the upstream GitHub repository.

git clone https://github.com/stotko/stdgpu

Build Instructions#

Since stdgpu is built with CMake, the usual steps for building CMake-based projects apply here as well. In the exemplary instructions below, stdgpu is built in Release mode and installed into a local bin directory. Other build modes and installation directories can be used in the same way.

Configuring#

First, a build directory (usually build/) should be created and the build configuration should be evaluated by CMake and stored into this directory.

mkdir build
cmake -B build -S . -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=bin

Alternatively, you can use the provided convenience script for your backend:

bash tools/backend/configure_cuda.sh Release

If you leave out CMAKE_INSTALL_PREFIX, CMake will automatically select an appropriate (platform-dependent) system directory for installation instead.

See also

A complete list of options used via -D<option>=<value> to control the build of stdgpu can be found in Configuration Options.
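Further options are passed in the same way. As a sketch, a configuration for the OpenMP backend with tests disabled could look as follows (the option names STDGPU_BACKEND and STDGPU_BUILD_TESTS are assumed to be among the options documented in Configuration Options):

```shell
# Configure the OpenMP backend in Release mode without building the unit tests.
cmake -B build -S . \
      -DCMAKE_BUILD_TYPE=Release \
      -DSTDGPU_BACKEND=STDGPU_BACKEND_OPENMP \
      -DSTDGPU_BUILD_TESTS=OFF
```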

Compiling#

Then, the library itself as well as further components (examples, benchmarks, or tests, depending on the configuration) are compiled. To speed up compilation, you can specify the maximum number of parallel jobs to override the default.

cmake --build build --config Release --parallel 8

Alternatively, you can use the provided convenience script:

bash tools/build.sh Release
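Rather than hard-coding the job count, you can derive it from the number of available cores; a sketch (nproc is Linux-specific, so a macOS fallback and a fixed default are included):

```shell
# Determine the number of available cores, falling back to 4 if unknown.
jobs=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 4)
echo "Building with $jobs parallel jobs"
cmake --build build --config Release --parallel "$jobs"
```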

Testing#

Running the unit tests is optional but recommended to verify that all features of stdgpu work correctly on your system. This requires the CMake option STDGPU_BUILD_TESTS to be set to ON during configuration (already the default if not altered), see Configuration Options for a complete list of options.

cmake -E chdir build ctest -V -C Release

Alternatively, you can use the provided convenience script:

bash tools/run_tests.sh Release
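If only a subset of the tests is of interest, ctest can filter tests by name; a sketch (the pattern unordered_map is only an example, list the actual test names first):

```shell
cd build
ctest -N -C Release                   # list all available tests without running them
ctest -R unordered_map -V -C Release  # run only tests whose name matches the pattern
```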

Installing#

As a final optional step, you can install the locally compiled version of stdgpu on your system.

cmake --install build --config Release

Alternatively, you can use the provided convenience script:

bash tools/install.sh Release
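Once installed, stdgpu can be consumed from a downstream CMake project. A minimal sketch, assuming the package exports the namespaced target stdgpu::stdgpu (my_app and main.cpp are placeholders for your own project):

```cmake
cmake_minimum_required(VERSION 3.18)
project(my_app LANGUAGES CXX)

# Locate the installed stdgpu package. If it was installed to a non-default
# prefix (such as the local bin directory above), point CMake at it via
# -DCMAKE_PREFIX_PATH=<install dir> when configuring this project.
find_package(stdgpu REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE stdgpu::stdgpu)
```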

Configuration Options#

Building stdgpu from source can be customized in various ways. The following list of CMake options, divided into build-related and library-related options, can be used for this purpose.