HPCW 3.0
HPCW is used to build and run several projects, following several steps: downloading the dependencies and inputs, configuring, building, running, and analysing the results. These steps are described below and illustrated through examples.
The build wrapper (found in the `toolchains` directory) is intended to simplify the usage of HPCW by combining most of the steps outlined below into a single script. Its usage is highly recommended!
:warning: Although it is possible to build everything at once, it is advised to build and run each model (or project) one at a time.
Files of interest:

- `versions.cmake`: describes project and dependency versions, as well as URLs, git repositories, etc.;
- `toolchains/`: defines the compiler and user-provided libraries;
- `projects/`: rules to build projects and dependencies;
- `downloads/`: destination folder for downloads; can be populated beforehand.

The dependencies are built internally, except for some that can be provided by the system (see section "Configure" below). Dependencies are downloaded on the fly to the `hpcw-store` directory. If an internet connection is not available, the user should manually go to the `downloads` directory and invoke CMake:
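A minimal sketch of this offline step, assuming the `downloads` directory contains a standalone CMake project (the exact invocation may differ):

```bash
# Run on a machine with internet access; paths are illustrative
cd $HPCW_SOURCE_DIR/downloads
cmake .   # fetches the source archives into hpcw-store
```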
If your cluster has no internet access, do this step on a machine that has a working internet connection. Then upload/synchronize the `hpcw-store` inside your `hpcw` directory on the cluster.
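For instance, the synchronization could be done with `rsync` (the hostname and paths below are placeholders):

```bash
# Push the pre-populated object store to the cluster
rsync -a hpcw-store/ mycluster:/path/to/hpcw/hpcw-store/
```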
Create a directory where HPCW will be built. `$HPCW_SOURCE_DIR` should point to the HPCW root folder:
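For example (directory names are illustrative):

```bash
export HPCW_SOURCE_DIR=/path/to/hpcw   # HPCW root folder
mkdir build && cd build                # separate build directory
```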
Use an existing environment and toolchain (or create your own, see here or here):
Several options can be defined within HPCW; you can either use `ccmake` (or `cmake-gui`) or pass them directly to the previous command.
:warning: Note: toolchain environments populate `$cmakeFlags`. Please add custom options before sourcing the toolchain.
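Putting this together, a sketch of the configure step (the toolchain file name and the extra option are placeholders, not the actual interface):

```bash
# 1. Add custom options BEFORE sourcing the toolchain environment
export cmakeFlags="-DENABLE_doc=ON"
# 2. Source an existing toolchain environment (hypothetical file name)
source $HPCW_SOURCE_DIR/toolchains/<your-machine>.env
# 3. Configure from the build directory using the populated flags
cmake $cmakeFlags $HPCW_SOURCE_DIR
```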
:warning: BEWARE: we strongly recommend creating, configuring, and building one benchmark at a time, each in a separate build directory.
You can also enable the generation of the documentation via Doxygen for offline viewing by setting the CMake flag `-DENABLE_doc=ON`, either in isolation or alongside the build of any benchmark.
The user can provide their own implementations of BLAS, HDF5, or NetCDF. The user is responsible for setting up the environment properly. Examples:

- `-DUSE_SYSTEM_blas=ON`
- `-DUSE_SYSTEM_hdf5=ON`
- `-DUSE_SYSTEM_hdf5=ON -DUSE_SYSTEM_netcdf=ON`
An exhaustive list of all the dependencies that can be provided by the system is found here.
We provide an example for ICON where NetCDF and HDF5 are compiled separately. Further information regarding the configuration and build process of ICON can be found here.
:information_source: Advice: for CMake to find your own MPI, please ensure that the `mpiexec` binary directory is in your `$PATH`.
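For instance (the installation path is a placeholder):

```bash
# Make the mpiexec binary discoverable by CMake
export PATH=/path/to/your-mpi/bin:$PATH
```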
To build HPCW, launch make:
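For example (the per-project target name is a placeholder; see the rules under `projects/` for the available targets):

```bash
make             # build everything at once (see the warning above)
make <project>   # build a single benchmark, e.g. one model at a time
```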
You do not need to pass the `-j` flag to enable parallel compilation. All projects obey the parallel compilation level defined by the option `BUILD_PARALLEL_LEVEL`, or the project-specific `{NEMO,XIOS,ICON}_BUILD_PARALLEL_LEVEL` for NEMO, XIOS, and ICON.
:warning: Other CMake generators (e.g., Ninja) have not been tested.
All projects are launched using:
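A sketch of the run step, assuming the projects are exposed as CTest tests (the name pattern is a placeholder):

```bash
# From the build directory: run the tests matching a given project
ctest -R <project> --output-on-failure
```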
Note: Some projects may require additional input files. See below.
Certain tests require additional input files. HPCW utilizes CMake's `ExternalData` module to fetch these inputs. As such, the input data will be downloaded at **build** time and stored both in a local object store (a directory called `hpcw-store` in `$HPCW_SOURCE_DIR`) and in the build tree.
The script `job-stagein.sh` is used to prepare some of the input data needed for test jobs:
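A hypothetical invocation (the script's location and arguments are assumptions; check the script itself for its actual interface):

```bash
# Stage input data before submitting test jobs (illustrative)
./job-stagein.sh
```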
:warning: Note: When working on a parallel file system, please consider striping the input and build directories. For Lustre this can be done with the `lfs` tool, both for new directories and for already existing files (i.e. after HPCW/CMake's `ExternalData` has populated the required inputs); see the sketch below.
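A sketch using standard Lustre commands (paths are placeholders; the original commands may differ):

```bash
# New files created under the directory inherit this striping
lfs setstripe -c ${STRIPE_COUNT} /path/to/build

# Restripe files that already exist, e.g. previously fetched inputs
lfs migrate -c ${STRIPE_COUNT} /path/to/inputs/*
```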
Please set the `STRIPE_COUNT` according to your hardware.
Input files are downloaded to the `hpcw-store` directory. If an internet connection is not available, the user should manually go to the `inputs` directory and invoke CMake:
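Analogous to the dependency downloads above, a minimal sketch (the exact invocation may differ):

```bash
cd $HPCW_SOURCE_DIR/inputs
cmake .   # fetches the input data into hpcw-store
```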
If your cluster has no internet access, do this step on a machine that has a working internet connection. Then upload/synchronize the `hpcw-store` inside your `hpcw` directory on the cluster.
Results can be automatically extracted from the log files using the `analyse.py` Python script. It can output CSV or JSON format from a given list of log files:
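A hypothetical invocation (the option names are assumptions; consult the script's help for the actual interface):

```bash
# Extract results from a list of log files into CSV (flags are illustrative)
python3 analyse.py --format csv run1.log run2.log > results.csv
```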
This is intended to be extended to support the Verification and Validation processes, and also to be used to extract the scoring. The build wrapper runs this analysis automatically unless it is explicitly disabled; it creates both a CSV file and an OpenMetrics file in the build directory.