Fold libvecgeomcuda into libvecgeom

VecGeom::vecgeomcuda and VecGeom::vecgeom have circular dependencies: vecgeomcuda uses the shape functions implemented in vecgeom, while vecgeom requires code that exists only in vecgeomcuda (e.g. the size of some of the content of the namespace vecgeom::cuda). Users link against VecGeom::vecgeom in all cases, in particular when building without CUDA enabled. If vecgeomcuda depends on vecgeom (the case for v1.2.10 and older), users need to explicitly add the dependency on vecgeomcuda when enabling CUDA support. If instead vecgeom depends on vecgeomcuda, users do not need to change anything when enabling CUDA support (CudaRdcUtils handles the dependency); however, there are then cases where some symbols in vecgeomcuda are not resolved (the linker would need another pass, and/or vecgeom would need to be added again after vecgeomcuda). In short, there is no good way to handle this split into two libraries with a circular dependency, and as is, it does not help to build a non-CUDA executable with a CUDA-enabled build of VecGeom.

There are two possible solutions: 1. tweak the code and move things around so that vecgeom no longer depends on vecgeomcuda; 2. merge the two libraries into one. Until there is a strong business case for implementing 1., this commit does the much easier 2.

Accompanying changes:
- Add backward-compatible ALIAS VecGeom::vecgeomcuda_static
- Increment CudaRdcUtils version to 10
- Always explicitly add the CUDA runtime to executables. If the executable's sources use the CUDA API only in C++ files, CMake cannot detect the need to add the CUDA runtime library. Since the type of runtime library (static vs. shared) depends on the type of the dependent library, it is not easy for the user to know which one to add, so for now the proper runtime library is always added to the executable.
- Detect late addition of CUDA source files. If target_sources is called explicitly, the target is turned into an RDC executable only at the next call to cuda_rdc_target_link_libraries, where we need to make sure RDC linking is turned on. Note: if the same happens for a library, it might not yet be supported either.
- Increment CudaRdcUtils version to 8
- cuda_rdc_check_cuda_runtime: ignore non-targets. An example of an entity that is not a target but appears on the list to be examined is a target brought in by a find_package inside a nested include and not declared GLOBAL; such a target goes "out of scope" when the scope in which the find_package was called ends.
- Prepend (static) libraries in reversed order, as needed for static builds
- Change the default to shared libraries (ON)
- Fix typo in test/VecGeomTest/CMakeLists.txt; this affects the case of a Geant4 build without Geant4::G4persistency
- Remove explicit dependence on vecgeomcuda (now implicit through vecgeom)
- Have libvecgeom depend on libvecgeomcuda; this is the more natural dependency, as users explicitly mention libvecgeom
- Import CudaRdcUtils v6 (object libraries and CUDA runtime). This version avoids duplicating the object libraries onto unintended targets. It also improves the handling of CUDA_RUNTIME_LIBRARY: fix its determination when not explicitly set, export it to avoid guessing, and fix the promotion and checks when there is a mismatch between the current and dependent libraries.
- Import CudaRdcUtils v4: improved handling of linked libraries
- Add a message about ignoring RDC installation
- Load CudaRdcUtils when loading the VecGeom targets
- Move CudaRdcUtils to cmake/external; the official repository is https://github.com/celeritas-project/CudaRdcUtils
- Update CudaRdcUtils to v2
- Migrate to using CudaRdcUtils. Because we need to iterate through the list of sources and CMake does not support nested lists, we cannot use generator expressions in the list of sources (nor in the list of libraries)
- Fix typo in the casing of ROOT::Core
- Fix silly copy/paste typo
- Add support for CMake 3.22 (only if no CUDA)
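With the two libraries folded together, a downstream CMake project that uses VecGeom only from C++ host code can stay unchanged between CUDA-enabled and CUDA-disabled installs. A minimal sketch, assuming hypothetical project and file names (MyApp, main.cpp):

```cmake
# Hypothetical downstream project; names are illustrative.
cmake_minimum_required(VERSION 3.16)
project(MyApp LANGUAGES CXX)

find_package(VecGeom REQUIRED)

add_executable(myapp main.cpp)
# With the folded library, no explicit vecgeomcuda dependency is needed;
# the backward-compatible VecGeom::vecgeomcuda_static ALIAS still exists
# for older build scripts.
target_link_libraries(myapp PRIVATE VecGeom::vecgeom)
```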
Introduction
VecGeom is a geometry modeller library with hit-detection features as needed by particle detector simulation at the LHC and beyond. It was incubated by a Geant R&D initiative within the EU-AIDA program, motivated by the goal of combining the code of Geant4 and ROOT/TGeo into a single, more maintainable piece of software. As such it is close in scope to the TGeo and Geant4 geometry modellers. Its main features are:
- Build a hierarchic detector geometry out of simple primitives and use it on the CPU or GPU(CUDA)
- Calculate distances and other geometrical information
- Collision detection and navigation in complex scenes
- SIMD support in various flavours:
  - True vector interfaces to primitives with SIMD acceleration when beneficial
  - SIMD acceleration of navigation through the use of special voxelization or bounding box hierarchies
- Runtime specialization of objects to improve execution speed via a factory mechanism and use of C++ templates
- VecGeom also compiles under CUDA
- A few generic kernels serve many instantiations of the various simple or vectored interfaces as well as the CUDA version.
Building/Installing VecGeom
Requirements
- CMake 3.16 or newer for configuration, plus a suitable build tool such as GNU make or Ninja
- C++ compiler supporting a minimum ISO Standard of 17
- The library has been tested to compile with (but is not necessarily limited to):
  - GCC >= 8
  - Clang >= 10
- VecCore version 0.8.0 or newer
  - VecGeom can build/install its own copy of VecCore by setting the CMake variable VECGEOM_BUILTIN_VECCORE to ON
Optional
- Vc 1.3.3 or newer for SIMD support
- Xerces-C for GDML support
- CUDA Toolkit 11.0 or newer for CUDA support
  - An NVidia GPU with sufficient compute capability must be present on the host system
  - It is recommended to use CMake 3.18 or newer for its improved native CUDA support
Quick Start
$ mkdir build && cd build
$ cmake ..
$ cmake --build .
$ cmake --build . --target install
Build Options
The table below shows the available CMake options for VecGeom that may be used to customize the build:
Option | Default | Description |
---|---|---|
VECGEOM_BACKEND | scalar | Vector backend API to be used (scalar/vc) |
VECGEOM_BUILTIN_VECCORE | OFF | Build VecCore and its dependencies from source |
VECGEOM_CUDA_VOLUME_SPECIALIZATION | OFF | Use specialized volumes for CUDA |
VECGEOM_DISTANCE_DEBUG | OFF | Enable comparison of calculated distances against ROOT/Geant4 behind the scenes |
VECGEOM_EMBREE | OFF | Enable Intel Embree |
VECGEOM_ENABLE_CUDA | OFF | Enable compilation for CUDA |
VECGEOM_FAST_MATH | OFF | Enable the -ffast-math compiler option in Release builds |
VECGEOM_GDML | OFF | Enable GDML persistency. Requires Xerces-C |
VECGEOM_GDMLDEBUG | OFF | Enable additional debug information in GDML module |
VECGEOM_BVH_SINGLE | ON if surface | Enable single precision for BVH traversal |
VECGEOM_INPLACE_TRANSFORMATIONS | ON | Put transformation as members rather than pointers into PlacedVolume objects |
VECGEOM_NO_SPECIALIZATION | ON | Disable specialization of volumes |
VECGEOM_PLANESHELL | ON | Enable the use of PlaneShell class for the trapezoid |
VECGEOM_QUADRILATERAL_ACCELERATION | ON | Enable SIMD vectorization when looping over quadrilaterals |
VECGEOM_SANITIZER | OFF | Enable memory sanitizer |
VECGEOM_SINGLE_PRECISION | OFF | Use single precision throughout the package |
VECGEOM_USE_CACHED_TRANSFORMATIONS | OFF | Use cached transformations in navigation states |
VECGEOM_USE_INDEXEDNAVSTATES | ON | Use indices rather than volume pointers when VECGEOM_NAV=path |
VECGEOM_USE_SURF | ON if CUDA | Enable surface model for navigation |
VECGEOM_NAV | tuple | Navigation state implementation (tuple/index/path) |
VECGEOM_NAVTUPLE_MAXDEPTH | 4 | Maximum depth for navigation tuple states |
VECGEOM_VECTOR | sse2 if x86 | Vector instruction set to be used |
The VECGEOM_NAV option supports the path value only if the surface implementation is disabled.
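For illustration (the option values below are examples, not recommendations), options from the table are passed at configure time as ordinary CMake cache variables:

```shell
$ cmake .. \
    -DVECGEOM_BACKEND=vc \
    -DVECGEOM_GDML=ON \
    -DCMAKE_INSTALL_PREFIX="$HOME/opt/vecgeom"
```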
The following options are available for enabling, building, and running tests:
Option | Default | Description |
---|---|---|
BUILD_TESTING | ON | Enable build of tests and integration with CTest |
VECGEOM_GEANT4 | OFF | Build with support for Geant4 (https://geant4.web.cern.ch) |
VECGEOM_ROOT | OFF | Build with support for ROOT (https://root.cern) |
VECGEOM_TEST_BENCHMARK | OFF | Enable performance comparisons |
VECGEOM_TEST_COVERAGE | OFF | Enable coverage testing flags |
VECGEOM_TEST_STATIC_ANALYSIS | OFF | Enable static analysis on VecGeom |
VECGEOM_TEST_VALIDATION | OFF | Enable validation tests from CMS geometry |
Please note that the Geant4 and ROOT options are only for testing and should not be enabled in builds for production use. Both Geant4 and ROOT provide their own interfaces for use of VecGeom by their consumers.
Documentation
Note on using VecGeom CUDA Support
If VecGeom is built with VECGEOM_ENABLE_CUDA, then it is still usable by CPU-only code, but you must link your binaries to both the vecgeom and vecgeomcuda libraries.
If your code uses VecGeom's CUDA interface in device code or kernels, then it must either:

1. Link and device-link to the vecgeomcuda target using the cuda_rdc CMake functions. In CMake, this is automatically handled by, e.g.:

   find_package(VecGeom)
   add_executable(MyCUDA MyCUDA.cu)
   set_target_properties(MyCUDA PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
   cuda_rdc_target_link_libraries(MyCUDA PRIVATE VecGeom::vecgeomcuda)

2. Link to the vecgeomcuda shared library, and device-link to the vecgeomcuda_static target, using the cuda_rdc CMake functions:

   find_package(VecGeom)
   cuda_rdc_add_library(MyCUDA MyCUDA.cu)
   cuda_rdc_set_target_properties(MyCUDA PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
   cuda_rdc_target_link_libraries(MyCUDA PRIVATE VecGeom::vecgeomcuda)

The cuda_rdc CMake functions (see CudaRdcUtils.cmake for more details) will handle any further device-linking of vecgeomcuda-using libraries that may be required if these libraries expose device/kernel interfaces.
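As a consequence, an intermediate library that itself exposes device/kernel interfaces built on VecGeom's CUDA support can propagate the device-link requirement to its own consumers. A minimal sketch, assuming hypothetical target and file names (MyGeoKernels, GeoKernels.cu, mysim):

```cmake
# Hypothetical intermediate library exposing device interfaces; all names
# here are illustrative, not part of VecGeom itself.
find_package(VecGeom REQUIRED)

cuda_rdc_add_library(MyGeoKernels SHARED GeoKernels.cu)
set_target_properties(MyGeoKernels PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
cuda_rdc_target_link_libraries(MyGeoKernels PUBLIC VecGeom::vecgeomcuda)

# Downstream consumers link only the intermediate target; the cuda_rdc
# functions take care of the extra device-link step against vecgeomcuda.
add_executable(mysim main.cpp)
cuda_rdc_target_link_libraries(mysim PRIVATE MyGeoKernels)
```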
Bug Reports
Please report all issues on our JIRA issue tracking system.