Commit 64871645 authored by Daniel Campora's avatar Daniel Campora

Updated readmes.

parent 41b51c2a
@@ -225,13 +225,12 @@ void saxpy::saxpy_t::set_arguments_size(
}
```
-To do that, arguments provides several methods:
-* `set_size<T>(const size_t)`: Sets the size of _name_ `T`. The `sizeof(T)` is implicit, so e.g. `set_size<T>(10)` will actually allocate space for `10 * sizeof(T)`.
-* `size<T>()`: Gets the size of _name_ `T`.
-* `data<T>()`: Gets the pointer to the beginning of `T`.
-* `first<T>()`: Gets the first element of `T`.
-* `name<T>()`: Gets the name of `T`.
+To do that, one may use the following functions:
+* `void set_size<T>(arguments, const size_t)`: Sets the size of _name_ `T`. The `sizeof(T)` is implicit, so e.g. `set_size<T>(10)` will actually allocate space for `10 * sizeof(T)`.
+* `size_t size<T>(arguments)`: Gets the size of _name_ `T`.
+* `T* data<T>(arguments)`: Gets the pointer to the beginning of `T`.
+* `T first<T>(arguments)`: Gets the first element of `T`.
Next, `operator()` should be defined:
@@ -253,6 +252,7 @@ void saxpy::saxpy_t::operator()(
In order to invoke host and global functions, wrapper methods `host_function` and `global_function` should be used. The syntax is as follows:
```
host_function(<host_function_identifier>)(<parameters of function>)
global_function(<global_function_identifier>)(<grid_size>, <block_size>, <stream>)(<parameters of function>)
```
`global_function` wraps a function identifier, such as `saxpy`. The object it returns can be used to invoke a _global kernel_ following a syntax that is similar to [CUDA's kernel invocation syntax](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#kernels). It expects:
@@ -303,4 +303,4 @@ In other words, in the code above:
The last thing remaining is to add the algorithm to a sequence, and run it.
-* [This readme](configuration/readme.md) explains how to configure the algorithms in an HLT1 sequence.
\ No newline at end of file
+* [This readme](configuration/readme.md) explains how to configure the algorithms in an HLT1 sequence.
@@ -86,6 +86,7 @@ Alternatively, cmake options can be passed with `-D` when invoking the cmake command
* `HIP_ARCH` - Selects the architecture to target with `HIP` compilation.
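For instance, a HIP build could be configured as follows (the architecture value `gfx906` is only an illustrative placeholder; any of the options above can be passed the same way):

```console
cmake -DHIP_ARCH=gfx906 ..
```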
### As a Gaudi/LHCb project
There are two ways of calling Allen with Gaudi:
1. Use Gaudi to update non-event data such as alignment and configuration constants and use Moore to steer the event loop and call Allen one event at a time (this method will be used for the simulation workflow and efficiency studies)
@@ -145,7 +146,7 @@ any other Gaudi/LHCb project:
make install
##### Build options
-By default the `DefaultSequence` is selected, Allen is built with
+By default the `hlt1_pp_default` sequence is selected, Allen is built with
CUDA, and the CUDA stack is searched for in `/usr/local/cuda`. These
defaults (and other cmake variables) can be changed by adding the same
flags that you would pass to a standalone build to the `CMAKEFLAGS`
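As a sketch, a standalone-build flag could be forwarded through the `CMAKEFLAGS` environment variable like this (the `HIP_ARCH` value is only a placeholder taken from the standalone build options):

```console
CMAKEFLAGS="-DHIP_ARCH=gfx906" make install
```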
@@ -231,9 +232,9 @@ For development purposes, a server with eight GeForce RTX 2080 Ti GPUs is set up
An online account is required to access it. If you need to create one, please send a request to [lbonsupp@cern.ch](mailto:lbonsupp@cern.ch).
Enter the online network from lxplus with `ssh lbgw`. Then `ssh n4050101` to reach the GPU server.
-* Upon login, a GPU node will be automatically assigned to you.
+* Upon login, a GPU will be automatically assigned to you.
* A development environment is set up (`gcc 8.2.0`, `cmake 3.14`, ROOT; the NVIDIA binary path is added).
-* Allen input data is available locally under `/scratch/allen_data/201907`.
+* Allen input data is available locally under `/scratch/allen_data`.
### How to measure throughput
@@ -243,7 +244,7 @@ The results of the tests are published in this [mattermost channel](https://matt
For local throughput measurements, we recommend the following settings in Allen standalone mode:
```console
-nvprof ./Allen -f /scratch/allen_data/201907/minbias_mag_down -n 1000 -m 700 -r 100 -t 12 -c 0
+nvprof ./Allen -f /scratch/allen_data/minbias_mag_down -n 1000 -m 700 -r 100 -t 12 -c 0
```